Instruction Following
262 papers with code • 1 benchmark • 14 datasets
Most implemented papers
Self-Instruct: Aligning Language Models with Self-Generated Instructions
Applying our method to the vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT-001, which was trained with private user data and human annotations.
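The core of Self-Instruct is a bootstrapping loop: a small pool of seed instructions is used to prompt the model for new instructions, which are filtered and added back to the pool. Below is a minimal sketch of that loop; the `complete` helper and the duplicate filter are illustrative assumptions (the paper additionally filters candidates by ROUGE-L overlap with the pool).

```python
import random
from typing import List

def complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's API."""
    raise NotImplementedError

def self_instruct(seed_instructions: List[str], target_size: int) -> List[str]:
    pool = list(seed_instructions)
    while len(pool) < target_size:
        # Prompt the model with a few in-context examples drawn from the pool.
        examples = random.sample(pool, k=min(8, len(pool)))
        prompt = (
            "Come up with a new task instruction.\n"
            + "\n".join(f"Instruction: {e}" for e in examples)
            + "\nInstruction:"
        )
        candidate = complete(prompt).strip()
        # Simple filtering: drop empty or duplicate instructions before adding to the pool.
        if candidate and all(candidate.lower() != p.lower() for p in pool):
            pool.append(candidate)
    return pool
```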
Habitat: A Platform for Embodied AI Research
We present Habitat, a platform for research in embodied artificial intelligence (AI).
QLoRA: Efficient Finetuning of Quantized LLMs
Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU.
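A minimal sketch of a QLoRA-style setup using the Hugging Face transformers, bitsandbytes, and peft libraries; the model name and hyperparameters below are placeholders, not the authors' exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization from the paper
    bnb_4bit_use_double_quant=True,         # double-quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                  # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the LoRA adapters are trained
```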
Visual Instruction Tuning
Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field.
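In LLaVA-style visual instruction tuning, features from a frozen image encoder are projected into the language model's embedding space and treated as extra tokens. A simplified sketch of that projector; dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisualProjector(nn.Module):
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # LLaVA v1 uses a single linear layer; later versions use a small MLP.
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_patches, vision_dim) from a frozen image encoder
        return self.proj(vision_features)  # (batch, num_patches, llm_dim)

projector = VisualProjector()
image_tokens = projector(torch.randn(1, 256, 1024))
# image_tokens are concatenated with the text token embeddings before the LLM forward pass.
```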
Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions -- training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones.
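The evaluation protocol is simple to state in code: hold out a set of tasks entirely and measure zero-shot performance on them after training on the rest. A schematic sketch with placeholder task identifiers, not the benchmark's actual split.

```python
import random

# Placeholder task identifiers; the real benchmark ships each task as a file with
# a declarative instruction plus input/output instances.
tasks = [f"task_{i:04d}" for i in range(1600)]
random.seed(0)
random.shuffle(tasks)

unseen_eval_tasks = tasks[:100]   # never shown during training
train_tasks = tasks[100:]
# Train on (instruction, input, output) triples from train_tasks only, then report
# metrics on unseen_eval_tasks to measure cross-task generalization.
```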
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
We present LLaMA-Adapter, a lightweight adaptation method to efficiently fine-tune LLaMA into an instruction-following model.
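The key trick is zero-initialized attention gating: learnable adaptation prompts are inserted at the top layers, but their contribution is scaled by a gate initialized to zero, so training starts from the frozen model's behavior. The sketch below is a simplified, illustrative re-implementation of that idea, not the released code.

```python
import torch
import torch.nn as nn

class ZeroInitPromptAttention(nn.Module):
    def __init__(self, dim: int, prompt_len: int = 10):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init gate: no effect at step 0

    def forward(self, hidden: torch.Tensor, attn: nn.MultiheadAttention) -> torch.Tensor:
        # hidden: (seq, batch, dim); attn is the frozen self-attention of the layer
        base, _ = attn(hidden, hidden, hidden)
        prompt = self.prompt.unsqueeze(1).expand(-1, hidden.size(1), -1)
        adapted, _ = attn(hidden, prompt, prompt)      # attend from tokens to prompts
        return base + torch.tanh(self.gate) * adapted  # gated residual contribution
```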
Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction
We propose to decompose instruction execution into goal prediction and action generation.
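Schematically, the decomposition pairs a goal predictor, which outputs a spatial distribution over the agent's observation, with an action generator conditioned on that prediction. The sketch below is illustrative; module sizes and the fusion scheme are assumptions.

```python
import torch
import torch.nn as nn

class GoalPredictor(nn.Module):
    def __init__(self, text_dim: int = 128, img_channels: int = 32):
        super().__init__()
        self.fuse = nn.Conv2d(img_channels + text_dim, 1, kernel_size=1)

    def forward(self, img_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, C, H, W) visual features; text_feat: (B, text_dim) instruction encoding
        t = text_feat[:, :, None, None].expand(-1, -1, img_feat.size(2), img_feat.size(3))
        logits = self.fuse(torch.cat([img_feat, t], dim=1))  # (B, 1, H, W)
        return logits.flatten(1).softmax(dim=-1)             # distribution over goal locations

class ActionGenerator(nn.Module):
    def __init__(self, goal_dim: int, num_actions: int = 4):
        super().__init__()
        self.policy = nn.Linear(goal_dim, num_actions)

    def forward(self, goal_dist: torch.Tensor) -> torch.Tensor:
        return self.policy(goal_dist)  # logits over discrete actions
```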
Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following
We introduce Point-Bind, a 3D multi-modality model aligning point clouds with 2D image, language, audio, and video.
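A common way to realize this kind of alignment is contrastive training of a point cloud encoder against frozen embeddings from the joint multi-modal space. The loss below is a generic symmetric InfoNCE sketch, not the released Point-Bind training code.

```python
import torch
import torch.nn.functional as F

def alignment_loss(point_emb: torch.Tensor, anchor_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    # point_emb, anchor_emb: (batch, dim); row i of each tensor is a paired sample
    # (e.g., a point cloud and the frozen embedding of its rendered image or caption).
    p = F.normalize(point_emb, dim=-1)
    a = F.normalize(anchor_emb, dim=-1)
    logits = p @ a.t() / temperature                  # cosine-similarity logits
    targets = torch.arange(p.size(0), device=p.device)
    # Symmetric InfoNCE: match each point cloud to its paired anchor and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```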
WizardLM: Empowering Large Language Models to Follow Complex Instructions
In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLMs instead of humans.
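The method, Evol-Instruct, repeatedly asks an LLM to rewrite existing instructions into more complex variants. A minimal sketch with a hypothetical `complete` helper and paraphrased evolution operations, not the paper's exact prompt templates.

```python
import random
from typing import List

def complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's API."""
    raise NotImplementedError

IN_DEPTH_OPS = [
    "Add one more constraint or requirement.",
    "Replace general concepts with more specific ones.",
    "Require multi-step reasoning to answer.",
]

def evolve(instructions: List[str], rounds: int = 3) -> List[str]:
    pool = list(instructions)
    for _ in range(rounds):
        new = []
        for inst in pool:
            op = random.choice(IN_DEPTH_OPS)
            prompt = f"Rewrite the following instruction to be more complex. {op}\n\n{inst}"
            new.append(complete(prompt).strip())
        pool.extend(new)  # the paper additionally filters out failed evolutions
    return pool
```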
LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset.