VNLA
1 paper with code • 0 benchmarks • 0 datasets
Find objects in photorealistic environments by requesting and executing language subgoals.
Benchmarks
These leaderboards are used to track progress in VNLA
No evaluation results yet. Help compare methods by submitting evaluation metrics.
Most implemented papers
Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention
We present Vision-based Navigation with Language-based Assistance (VNLA), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments.
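The sketch below is a toy illustration of the interaction loop this task implies: an agent navigates toward a goal object and, when uncertain, requests a language subgoal from an advisor ("indirect intervention"). It is not the authors' implementation; the `Env`, `Agent`, and `Advisor` classes and all their methods are hypothetical stand-ins, and the real system learns both the navigation policy and when to ask for help.

```python
# Minimal sketch of a VNLA-style episode, assuming hypothetical
# Env / Agent / Advisor interfaces (not the paper's actual API).

class Env:
    """Toy environment standing in for a photorealistic simulator."""
    def __init__(self):
        self.steps_to_goal = 5

    def reset(self, goal):
        self.steps_to_goal = 5
        return {"lost": False}

    def step(self, action):
        self.steps_to_goal -= 1
        # Pretend the agent gets lost once along the way.
        state = {"lost": self.steps_to_goal == 3}
        return state, self.steps_to_goal <= 0


class Advisor:
    """Simulated advisor that answers help requests with a language subgoal."""
    def subgoal(self, state, goal):
        return f"head toward the {goal}"  # placeholder subgoal string


class Agent:
    """Agent that navigates and may request language-based assistance."""
    def __init__(self, help_budget=3):
        self.help_budget = help_budget

    def should_request_help(self, state):
        # Placeholder heuristic; in the paper, asking for help is learned.
        return state.get("lost", False) and self.help_budget > 0

    def act(self, state, instruction):
        return "forward"  # placeholder policy following the instruction


def run_episode(env, agent, advisor, goal, max_steps=50):
    state = env.reset(goal)
    instruction = f"find the {goal}"
    for _ in range(max_steps):
        if agent.should_request_help(state):
            agent.help_budget -= 1
            # Indirect intervention: advisor replies with a language subgoal.
            instruction = advisor.subgoal(state, goal)
        state, done = env.step(agent.act(state, instruction))
        if done:
            return True
    return False


if __name__ == "__main__":
    print(run_episode(Env(), Agent(), Advisor(), "mug"))  # -> True
```

The key design point the sketch mirrors is that assistance arrives as language to be interpreted and executed, rather than as a direct low-level action, which is what distinguishes VNLA from teleoperation-style help.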