Homography Estimation
52 papers with code • 4 benchmarks • 7 datasets
Homography estimation is a technique in computer vision and image processing for finding the projective transformation that relates two images of the same scene captured from different viewpoints (strictly, of a planar scene or of views from a purely rotating camera). It is used to align images, correct perspective distortion, and perform image stitching. To estimate a homography, a set of corresponding points between the two images is found and a 3×3 projective model is fit to those correspondences. Approaches include direct (intensity-based) methods, robust fitting with RANSAC, and machine learning-based methods.
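As a concrete illustration, here is a minimal sketch of the classical pipeline, assuming OpenCV and NumPy are available; the image file names and parameter values are placeholders, not a definitive recipe:

```python
# Sketch: estimate a homography between two views by matching ORB keypoints
# and fitting the 3x3 model robustly with RANSAC.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors in both images.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Fit the homography with RANSAC; `mask` marks the inlier correspondences.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Warp the first image into the second image's frame (e.g., for stitching).
warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
```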
Most implemented papers
SuperPoint: Self-Supervised Interest Point Detection and Description
This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision.
Deep Image Homography Estimation
We present a deep convolutional neural network for estimating the relative homography between a pair of images.
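For context, deep approaches in this line of work typically regress a 4-point parameterization (the displacements of the four image corners) from a stacked image pair. The sketch below is an illustrative, simplified network under that assumption, not the exact architecture from the paper:

```python
# Hedged sketch: a small CNN that regresses eight corner offsets (dx, dy for
# each of the four corners) from a 2-channel stacked grayscale image pair.
import torch
import torch.nn as nn

class HomographyRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 8)  # 4-point offsets

    def forward(self, pair):  # pair: (B, 2, H, W) stacked image patches
        return self.head(self.features(pair).flatten(1))

# The predicted offsets, together with the fixed source corners, determine the
# homography (e.g., via cv2.getPerspectiveTransform on the 4 point pairs).
```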
Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model
Homography estimation between multiple aerial images can provide relative pose estimation for collaborative autonomous exploration and monitoring.
CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus
We present a robust estimator for fitting multiple parametric models of the same form to noisy measurements.
MAGSAC: marginalizing sample consensus
A method called sigma-consensus is proposed to eliminate the need for a user-defined inlier-outlier threshold in RANSAC.
UnsuperPoint: End-to-end Unsupervised Interest Point Detector and Descriptor
In this work, we introduce an unsupervised deep learning-based interest point detector and descriptor.
Neural Outlier Rejection for Self-Supervised Keypoint Learning
By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework, we show that we are able to simultaneously self-supervise keypoint description and improve keypoint matching.
DUT: Learning Video Stabilization by Simply Watching Unstable Videos
In this paper, we attempt to tackle the video stabilization problem in a deep unsupervised learning manner, which borrows the divide-and-conquer idea from traditional stabilizers while leveraging the representation power of DNNs to handle the challenges in real-world scenarios.
Deep Homography Estimation in Dynamic Surgical Scenes for Laparoscopic Camera Motion Extraction
We perform an extensive evaluation of state-of-the-art (SOTA) Deep Neural Networks (DNNs) across multiple compute regimes, finding that our method transfers from our camera-motion-free da Vinci surgery dataset to videos of laparoscopic interventions, outperforming classical homography estimation approaches in both precision (by 41%) and CPU runtime (by 43%).
ALIKE: Accurate and Lightweight Keypoint Detection and Descriptor Extraction
A reprojection loss is proposed to directly optimize these sub-pixel keypoints, and a dispersity peak loss is introduced to regularize keypoint accuracy.