Universal Meta-Learning
1 paper with code • 0 benchmarks • 0 datasets
Meta-learning without meta-training on the training set or fine-tuning on the support set.
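To make the definition concrete, here is a minimal sketch (not taken from any listed paper) of one way a universal meta-learner can classify a few-shot task: embed the support and query examples with a frozen pretrained encoder and assign each query to the nearest class centroid. No meta-training on a training set and no fine-tuning on the support set occurs; the encoder here is a stand-in random projection, since a real system would use a pretrained model such as CLIP or an LLM.

```python
import numpy as np

def frozen_encoder(x, proj):
    # Stand-in for a frozen pretrained encoder (hypothetical):
    # a fixed nonlinear random projection. Its weights are never updated.
    return np.tanh(x @ proj)

def nearest_centroid_predict(support_x, support_y, query_x, proj):
    # Embed support and query points with the frozen encoder --
    # no meta-training and no fine-tuning on the support set.
    emb_s = frozen_encoder(support_x, proj)
    emb_q = frozen_encoder(query_x, proj)
    classes = np.unique(support_y)
    # One centroid per class, averaged over that class's support embeddings.
    centroids = np.stack([emb_s[support_y == c].mean(axis=0) for c in classes])
    # Assign each query to the class of its nearest centroid.
    dists = np.linalg.norm(emb_q[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

rng = np.random.default_rng(0)
proj = rng.normal(size=(4, 16))
# Toy 2-way 3-shot task: two well-separated Gaussian classes.
support_x = np.concatenate([rng.normal(0.0, 0.1, (3, 4)),
                            rng.normal(3.0, 0.1, (3, 4))])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.concatenate([rng.normal(0.0, 0.1, (2, 4)),
                          rng.normal(3.0, 0.1, (2, 4))])
pred = nearest_centroid_predict(support_x, support_y, query_x, proj)
print(pred)
```

The key property is that a new task is solved purely at inference time: the support set only supplies centroids, never gradient updates.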
Benchmarks
These leaderboards are used to track progress in universal meta-learning
No evaluation results yet.
Most implemented papers
Context-Aware Meta-Learning
Large Language Models like ChatGPT demonstrate a remarkable capacity to learn new concepts during inference without any fine-tuning.