Hallucination
318 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Hallucination
Libraries
Use these libraries to find Hallucination models and implementations

Most implemented papers
PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models
We present PULSE (Photo Upsampling via Latent Space Exploration), an algorithm that generates high-resolution, realistic images at resolutions previously unseen in the literature.
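The core idea is to search the latent space of a pretrained generative model for a high-resolution image that downscales to the given low-resolution input. Below is a minimal sketch in PyTorch, assuming a pretrained `generator` callable and plain bicubic downscaling; the actual method additionally constrains the latent to stay near the generator's manifold (e.g. via a spherical constraint), and all names here are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def pulse_style_upsample(generator, lr_image, latent_dim=512,
                         steps=500, step_size=0.1):
    """Search the generator's latent space for a high-res image whose
    bicubic downscaling matches the low-res input (illustrative sketch)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=step_size)
    for _ in range(steps):
        hr = generator(z)                                # candidate HR image, NCHW
        ds = F.interpolate(hr, size=lr_image.shape[-2:],
                           mode="bicubic", align_corners=False)
        loss = F.mse_loss(ds, lr_image)                  # downscaling-consistency loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```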
ReAct: Synergizing Reasoning and Acting in Language Models
While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g., chain-of-thought prompting) and acting (e.g., action plan generation) have primarily been studied as separate topics.
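ReAct interleaves free-form reasoning steps with tool-invoking action steps, feeding each tool result back to the model as an observation. A minimal sketch of such a loop, where `llm` is any text-completion callable and `tools` maps action names to functions; both are stand-ins, not the paper's implementation.

```python
import re

def react_loop(llm, tools, question, max_steps=8):
    """Minimal ReAct-style loop: the model alternates reasoning
    ("Thought: ...") with tool calls ("Action: Name[arg]"), and each
    tool result is appended as an "Observation" before the next step."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)                     # model emits next Thought/Action
        transcript += step + "\n"
        if "Final Answer:" in step:                # model decided it is done
            return step.split("Final Answer:")[-1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
        if match:                                  # e.g.  Action: Search[Apple Remote]
            name, arg = match.groups()
            observation = tools[name](arg)         # execute the named tool
            transcript += f"Observation: {observation}\n"
    return None                                    # gave up after max_steps
```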
HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models
Our comprehensive case studies within HallusionBench shed light on the challenges of hallucination and illusion in LVLMs.
Im2Flow: Motion Hallucination from Static Images for Action Recognition
We show the power of hallucinated flow for recognition, successfully transferring the learned motion into a standard two-stream network for activity recognition.
Pushing the Limits of Low-Resource Morphological Inflection
Recent years have seen exceptional strides in the task of automatic morphological inflection generation.
On hallucinations in tomographic image reconstruction
The behavior of different reconstruction methods under the proposed formalism is discussed with the help of numerical studies.
Dataset Distillation via Factorization
In this paper, we study dataset distillation (DD) from a novel perspective and introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing DD baseline.
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
In this work, we propose "SelfCheckGPT", a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e., without an external database.
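The underlying check: if the model actually knows a fact, independently drawn samples should agree on it, so sentences that the extra samples fail to support are likely hallucinated. A rough sketch of that loop, using a naive lexical-overlap proxy for "support" (the paper scores consistency with stronger variants, e.g. BERTScore-, QA-, or NLI-based); `model` is a stand-in for any sampling-capable LLM.

```python
def supports(sample: str, sentence: str, threshold: float = 0.5) -> bool:
    """Naive lexical-overlap proxy for 'does this sample support the
    sentence?' (the paper uses stronger consistency scorers)."""
    words = set(sentence.lower().split())
    overlap = len(words & set(sample.lower().split()))
    return overlap / max(len(words), 1) >= threshold

def selfcheck_scores(model, prompt, response_sentences, n_samples=5):
    """Draw extra stochastic samples and score each response sentence by
    how often the samples fail to support it (higher => more suspect)."""
    samples = [model(prompt, temperature=1.0) for _ in range(n_samples)]
    scores = []
    for sent in response_sentences:
        unsupported = sum(not supports(s, sent) for s in samples)
        scores.append(unsupported / n_samples)
    return scores
```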
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach that evaluates visual instruction tuning the way human experts would.
Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph
Although large language models (LLMs) have achieved significant success in various tasks, they often struggle with hallucination problems, especially in scenarios requiring deep and responsible reasoning.