LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture, incorporating various improvements that were subsequently proposed for it. The main differences from the original architecture are listed below.

  * Pre-normalization: the input of each transformer sub-layer is normalized with RMSNorm, rather than normalizing the output.
  * SwiGLU activation: the ReLU non-linearity in the feed-forward network is replaced with the SwiGLU activation function.
  * Rotary embeddings: absolute positional embeddings are removed and rotary positional embeddings (RoPE) are applied at each layer.
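One of these changes, RMSNorm pre-normalization, is straightforward to illustrate. Below is a minimal PyTorch sketch; the class name, default `eps`, and overall structure are our own illustration under these assumptions, not the official implementation.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square layer normalization, used for pre-normalization in LLaMA."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learnable per-dimension gain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale by the reciprocal root mean square of the activations
        # (no mean subtraction, unlike LayerNorm), then apply the gain.
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight
```

In the pre-normalization setup, this module would be applied to the input of each attention and feed-forward sub-layer before the residual connection, rather than to the sub-layer output.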
Tasks addressed by papers that use LLaMA, ranked by number of papers:

Task | Papers | Share |
---|---|---|
Language Modelling | 99 | 13.54% |
Large Language Model | 58 | 7.93% |
Question Answering | 32 | 4.38% |
Quantization | 26 | 3.56% |
Text Generation | 26 | 3.56% |
In-Context Learning | 23 | 3.15% |
Instruction Following | 22 | 3.01% |
Retrieval | 20 | 2.74% |
Code Generation | 18 | 2.46% |