Instruction Following

262 papers with code • 1 benchmark • 14 datasets

Instruction following is the task of training and evaluating models (typically large language models or embodied agents) so that they carry out natural-language instructions accurately and safely.

Libraries

Use these libraries to find Instruction Following models and implementations
See all 7 libraries.

Most implemented papers

AlpaGasus: Training A Better Alpaca with Fewer Data

gpt4life/alpagasus 17 Jul 2023

Large language models (LLMs) strengthen instruction-following capability through instruction-finetuning (IFT) on supervised instruction/response data.
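AlpaGasus's central idea is to grade each instruction/response pair with a strong LLM and keep only the high-scoring examples before IFT. Below is a minimal, hedged sketch of that filtering step; the grading prompt, threshold, field names, and file name are illustrative assumptions, not the gpt4life/alpagasus interface.

```python
# Illustrative sketch of LLM-based quality filtering before instruction finetuning.
# The scoring prompt, threshold, and field names are assumptions, not the repo's API.
import json

def score_example(example, grader):
    """Ask a grader LLM to rate an instruction/response pair on a 0-5 scale."""
    prompt = (
        "Rate the quality of this response to the instruction on a 0-5 scale. "
        "Reply with a single number.\n"
        f"Instruction: {example['instruction']}\n"
        f"Response: {example['output']}"
    )
    return float(grader(prompt))  # grader is any callable that queries an LLM

def filter_dataset(examples, grader, threshold=4.5):
    """Keep only the examples the grader scores at or above the threshold."""
    return [ex for ex in examples if score_example(ex, grader) >= threshold]

if __name__ == "__main__":
    data = json.load(open("alpaca_data.json"))              # Alpaca-style examples
    filtered = filter_dataset(data, grader=lambda p: 5.0)   # plug in a real LLM client
    print(f"kept {len(filtered)} of {len(data)} examples")
```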

L-Eval: Instituting Standardized Evaluation for Long Context Language Models

openlmlab/leval 20 Jul 2023

Recently, there has been growing interest in extending the context length of large language models (LLMs), aiming to effectively process long inputs of one turn or conversations with more extensive histories.

InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4

waltonfuture/InstructionGPT-4 23 Aug 2023

To achieve this, we first propose several metrics to assess the quality of multimodal instruction data.
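One concrete example of such a metric is image-text alignment. The sketch below uses CLIP similarity as a stand-in quality score for a multimodal instruction example; this is an illustrative proxy, not necessarily one of the paper's metrics.

```python
# Illustrative proxy metric: CLIP image-text similarity as one signal of
# multimodal instruction-data quality (not necessarily the paper's own metric).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_alignment_score(image_path: str, text: str) -> float:
    """Higher scores suggest the instruction/response text matches the image."""
    image = Image.open(image_path)
    inputs = processor(text=[text], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.logits_per_image.item()
```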

ModuLoRA: Finetuning 2-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers

kuleshov-group/llmtools 28 Sep 2023

We propose a memory-efficient finetuning algorithm for large language models (LLMs) that supports finetuning LLMs with 65B parameters in 2/3/4-bit precision on as little as one 24GB GPU.
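The general recipe behind this family of methods is to freeze a low-bit quantized base model and train small LoRA adapters on top of it. The sketch below shows that pattern with the HuggingFace/bitsandbytes/PEFT stack as a QLoRA-style analogue; it is not the kuleshov-group/llmtools interface, which integrates its own modular quantizers.

```python
# QLoRA-style sketch of low-bit finetuning: a frozen 4-bit base model plus
# trainable LoRA adapters. Uses the HF/bitsandbytes/PEFT stack as an analogue,
# not the llmtools API. Model name and LoRA hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",            # any causal LM checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```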

Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic

hiyouga/llama-factory 19 Feb 2024

We demonstrate the effectiveness of RESTA in both parameter-efficient and full fine-tuning, covering a wide range of downstream tasks, including instruction following in Chinese, English, and Hindi, as well as problem-solving capabilities in Code and Math.
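The title names the mechanism: RESTA restores safety by adding a safety vector, obtained via task arithmetic in weight space, back into the fine-tuned model. A minimal sketch of that operation follows; the function, argument names, and scaling scheme are illustrative, not the repo's API.

```python
# Minimal sketch of safety re-alignment via task arithmetic: add a "safety
# vector" (aligned minus unaligned weights) back into a fine-tuned model.
# Names and the scaling scheme are illustrative assumptions.
import torch

def restore_safety(finetuned_sd, aligned_sd, unaligned_sd, scale=1.0):
    """Return a new state dict with the scaled safety vector added back in."""
    restored = {}
    for name, weight in finetuned_sd.items():
        if name in aligned_sd and name in unaligned_sd:
            safety_vector = aligned_sd[name] - unaligned_sd[name]
            restored[name] = weight + scale * safety_vector
        else:
            restored[name] = weight.clone()
    return restored

# Usage: pass the state dicts of a safety-aligned model, its unaligned
# counterpart, and the downstream fine-tuned model, then load the result
# back with model.load_state_dict(restored).
```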

ShapeLLM: Universal 3D Object Understanding for Embodied Interaction

qizekun/ShapeLLM 27 Feb 2024

This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring a universal 3D object understanding with 3D point clouds and languages.

The Replica Dataset: A Digital Replica of Indoor Spaces

facebookresearch/Replica-Dataset 13 Jun 2019

We introduce Replica, a dataset of 18 highly photo-realistic 3D indoor scene reconstructions at room and building scale.

Language as an Abstraction for Hierarchical Deep Reinforcement Learning

google-research/clevr_robot_env NeurIPS 2019

We find that, using our approach, agents can learn to solve diverse, temporally-extended tasks such as object sorting and multi-object rearrangement, including from raw pixel observations.
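The structural idea is a two-level policy in which the high level acts by emitting language instructions and the low level executes primitive actions conditioned on the current instruction. Below is a toy sketch of that control loop; the instruction set, policies, and environment hook are placeholders, not the google-research/clevr_robot_env API.

```python
# Toy sketch of language-as-abstraction hierarchical control: the high-level
# policy acts in instruction space, the low-level policy grounds instructions
# into primitive actions. All names and the instruction set are placeholders.
import random

INSTRUCTIONS = [
    "move the red ball to the left of the blue cube",
    "push the green cylinder in front of the purple sphere",
]

class HighLevelPolicy:
    def act(self, observation):
        # Placeholder for a learned policy over the instruction vocabulary.
        return random.choice(INSTRUCTIONS)

class LowLevelPolicy:
    def act(self, observation, instruction):
        # Placeholder for an instruction-conditioned policy over primitive actions.
        return {"primitive": "move_arm", "instruction": instruction}

def rollout(env_step, horizon=20, instruction_period=5):
    obs, instruction = None, None
    high, low = HighLevelPolicy(), LowLevelPolicy()
    for t in range(horizon):
        if t % instruction_period == 0:
            instruction = high.act(obs)              # high level re-plans in language space
        obs = env_step(low.act(obs, instruction))    # low level executes primitives
    return obs
```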

Guiding Multi-Step Rearrangement Tasks with Natural Language Instructions

jhu-lcsr/good_robot Conference on Robot Learning (CoRL) 2021

Our model completes block manipulation tasks with synthetic commands 530% more often than a UNet-based baseline, and learns to localize actions correctly while creating a mapping of symbols to perceptual input that supports compositional reasoning.

DialFRED: Dialogue-Enabled Agents for Embodied Instruction Following

xfgao/dialfred 27 Feb 2022

Language-guided Embodied AI benchmarks requiring an agent to navigate an environment and manipulate objects typically allow one-way communication: the human user gives a natural language command to the agent, and the agent can only follow the command passively.