Referring Image Matting
3 papers with code • 0 benchmarks • 0 datasets
Extracting the meticulous alpha matte of the specific object in an image that best matches a given natural language description, e.g., a keyword or an expression.
Benchmarks
These leaderboards are used to track progress in Referring Image Matting
Most implemented papers
Referring Image Matting
Different from conventional image matting, which either requires user-defined scribbles/trimap to extract a specific foreground object or directly extracts all the foreground objects in the image indiscriminately, we introduce a new task named Referring Image Matting (RIM) in this paper, which aims to extract the meticulous alpha matte of the specific object that best matches the given natural language description, thus enabling a more natural and simpler instruction for image matting.
Deep Image Matting: A Comprehensive Survey
Image matting refers to extracting precise alpha matte from natural images, and it plays a critical role in various downstream applications, such as image editing.
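To make the notion of an alpha matte concrete, the sketch below applies the standard compositing equation I = αF + (1 − α)B, which underlies all of the matting work listed here: given a per-pixel alpha matte, the extracted foreground can be blended onto a new background. The arrays are synthetic toy data, not output from any of the models above.

```python
import numpy as np

# Toy illustration of alpha compositing: I = alpha * F + (1 - alpha) * B.
# All values here are synthetic stand-ins for a real matte and real images.
H, W = 4, 4
alpha = np.zeros((H, W, 1))
alpha[1:3, 1:3] = 1.0                      # toy matte: object occupies the center
foreground = np.full((H, W, 3), 200.0)     # stand-in for the matted foreground
background = np.full((H, W, 3), 30.0)      # new background to composite onto

# Blend per pixel; fractional alpha values would give soft boundary pixels.
composite = alpha * foreground + (1.0 - alpha) * background
```

In real use the matte is continuous in [0, 1] along hair, fur, and other soft boundaries, which is exactly why matting demands more precision than binary segmentation.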
Matting Anything
In this paper, we propose the Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexible and interactive visual or linguistic user prompt guidance.