Situation Recognition
9 papers with code • 1 benchmark • 0 datasets
Situation Recognition aims to produce a structured image summary that describes the primary activity (verb) and its relevant entities (nouns).
Most implemented papers
Collaborative Transformers for Grounded Situation Recognition
To implement this idea, we propose Collaborative Glance-Gaze TransFormer (CoFormer) that consists of two modules: Glance transformer for activity classification and Gaze transformer for entity estimation.
Commonly Uncommon: Semantic Sparsity in Situation Recognition
Semantic sparsity is a common challenge in structured visual classification problems; when the output space is complex, the vast majority of the possible predictions are rarely, if ever, seen in the training set.
Situation Recognition: Visual Semantic Role Labeling for Image Understanding
This paper introduces situation recognition, the problem of producing a concise summary of the situation an image depicts, including: (1) the main activity (e.g., clipping), (2) the participating actors, objects, substances, and locations (e.g., man, shears, sheep, wool, and field), and most importantly (3) the roles these participants play in the activity (e.g., the man is clipping, the shears are his tool, the wool is being clipped from the sheep, and the clipping is in a field).
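The clipping example above can be sketched as a simple frame structure. This is a minimal illustration, not the dataset's actual schema; the role names (agent, tool, item, source, place) follow the abstract's example, and `describe` is a hypothetical helper for rendering the frame as text.

```python
# A situation frame: one verb plus a mapping from semantic roles to nouns.
# Role names and nouns follow the "clipping" example; real benchmarks use
# verb-specific role sets defined by the dataset.
situation = {
    "verb": "clipping",
    "roles": {
        "agent": "man",
        "tool": "shears",
        "item": "wool",
        "source": "sheep",
        "place": "field",
    },
}

def describe(frame):
    """Render a situation frame as a short textual summary."""
    roles = ", ".join(f"{r}: {n}" for r, n in frame["roles"].items())
    return f"{frame['verb']} ({roles})"

print(describe(situation))
# → clipping (agent: man, tool: shears, item: wool, source: sheep, place: field)
```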
Situation Recognition with Graph Neural Networks
We address the problem of recognizing situations in images.
Grounded Situation Recognition
We introduce Grounded Situation Recognition (GSR), a task that requires producing structured semantic summaries of images describing: the primary activity, entities engaged in the activity with their roles (e.g., agent, tool), and bounding-box groundings of entities.
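Compared with plain situation recognition, GSR additionally localizes each role filler. A minimal sketch of such a grounded frame, under the assumption of `(x1, y1, x2, y2)` pixel boxes and `None` for entities without a localizable box (the coordinates here are illustrative, not from any dataset):

```python
# A grounded situation frame: each role filler carries a noun plus an
# optional bounding box (x1, y1, x2, y2); None marks ungrounded entities.
grounded = {
    "verb": "clipping",
    "roles": {
        "agent": {"noun": "man", "box": (120, 40, 300, 420)},
        "tool": {"noun": "shears", "box": (210, 180, 260, 240)},
        "place": {"noun": "field", "box": None},  # scene-level, no box
    },
}

# Count how many role fillers come with a bounding-box grounding.
num_grounded = sum(
    1 for filler in grounded["roles"].values() if filler["box"] is not None
)
print(num_grounded)
# → 2
```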
Attention-Based Context Aware Reasoning for Situation Recognition
However, existing query-based reasoning methods have not considered the handling of inter-dependent queries, which is a unique requirement of semantic role prediction in SR.
Grounded Situation Recognition with Transformers
Grounded Situation Recognition (GSR) is the task of not only classifying a salient action (verb), but also predicting the entities (nouns) associated with semantic roles and their locations in the given image.
Rethinking the Two-Stage Framework for Grounded Situation Recognition
Since each verb is associated with a specific set of semantic roles, all existing GSR methods resort to a two-stage framework: predicting the verb in the first stage and detecting the semantic roles in the second stage.
ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition
Situation Recognition is the task of generating a structured summary of what is happening in an image using an activity verb and the semantic roles played by actors and objects.