Affordance Recognition
6 papers with code • 2 benchmarks • 1 dataset
Affordance recognition from Human-Object Interaction
Most implemented papers
Visual Compositional Learning for Human-Object Interaction Detection
The integration of decomposition and composition enables VCL to share object and verb features across different HOI samples and images, and to generate new interaction samples and new HOI types, largely alleviating the long-tail distribution problem and benefiting low-shot and zero-shot HOI detection.
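The core compositional idea can be illustrated with a minimal sketch: verb and object features extracted from different HOI samples are recombined to form new, possibly unseen, verb-object pairs. All names, feature shapes, and values below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the "compose" step in compositional HOI
# learning: cross-pairing verb features with object features yields
# HOI combinations never observed together in training.
import itertools

# Toy (label, feature) pairs; real features would come from a detector backbone.
verbs = [("ride", [0.2, 0.9]), ("hold", [0.7, 0.1])]
objects = [("bicycle", [0.5, 0.5]), ("cup", [0.1, 0.8])]

def compose(verb, obj):
    """Concatenate a verb feature and an object feature into a composite HOI feature."""
    (v_label, v_feat), (o_label, o_feat) = verb, obj
    return (f"{v_label} {o_label}", v_feat + o_feat)

# Cross-pairing produces 2 x 2 = 4 composite HOIs, including pairs
# (e.g. "hold bicycle") that may be rare or unseen in the training set.
composites = [compose(v, o) for v, o in itertools.product(verbs, objects)]
labels = [label for label, _ in composites]
```

In the real method these composite features are fed to the HOI classifier as additional training samples, which is what mitigates the long-tail problem.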
Affordance Transfer Learning for Human-Object Interaction Detection
The proposed method can thus be used to 1) improve the performance of HOI detection, especially for the HOIs with unseen objects; and 2) infer the affordances of novel objects.
Discovering Human-Object Interaction Concepts via Self-Compositional Learning
Therefore, the proposed method enables learning of both known and unknown HOI concepts.
Recognizing Object Affordances to Support Scene Reasoning for Manipulation Tasks
Unfortunately, the top-performing affordance recognition methods rely on object category priors to boost the accuracy of affordance detection and segmentation.
Detecting Human-Object Interaction via Fabricated Compositional Learning
With the proposed object fabricator, we are able to generate large-scale HOI samples for rare and unseen categories to alleviate the open long-tailed issues in HOI detection.
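The fabricator idea can be sketched as generating synthetic object features for rare or unseen classes from a class embedding plus noise, then pairing them with real verb features as extra training samples. The function name, embedding values, and noise model below are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of an "object fabricator" for long-tailed HOI
# detection: synthetic object features for a rare class are produced
# by perturbing an (assumed learned) class embedding with noise.
import random

random.seed(0)

def fabricate_object_feature(class_embedding, noise_scale=0.1):
    """Perturb a class embedding with uniform noise to mimic a real object feature."""
    return [x + random.uniform(-noise_scale, noise_scale) for x in class_embedding]

# A rare class with few (or no) real training images.
rare_class = "surfboard"
class_embedding = [0.3, 0.6, 0.1, 0.9]  # assumed learned embedding

# Fabricate several samples to rebalance the long-tailed distribution;
# each would be composed with a verb feature to form a synthetic HOI sample.
fake_features = [fabricate_object_feature(class_embedding) for _ in range(3)]
```

In the actual method the fabricated features are learned jointly with the detector rather than drawn from fixed noise, but the sampling role is the same.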
Fine-grained Affordance Annotation for Egocentric Hand-Object Interaction Videos
Object affordance is an important concept in hand-object interaction, providing information on action possibilities based on human motor capacity and objects' physical properties, thus benefiting tasks such as action anticipation and robot imitation learning.