Surgical phase recognition
15 papers with code • 2 benchmarks • 2 datasets
The first 40 videos are used for training, and the last 40 for testing.
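The split above is a simple index partition. A minimal sketch, assuming 80 videos ordered by index (the video IDs below are hypothetical placeholders):

```python
# Partition an ordered list of video IDs into halves, following the
# "first 40 train / last 40 test" convention described above.
video_ids = [f"video{idx:02d}" for idx in range(1, 81)]  # hypothetical IDs

train_videos = video_ids[:40]  # first 40 videos for training
test_videos = video_ids[40:]   # last 40 videos for testing

print(len(train_videos), len(test_videos))  # 40 40
```

Because the split is positional rather than random, results are directly comparable across papers that adopt the same convention.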
Most implemented papers
TeCNO: Surgical Phase Recognition with Multi-Stage Temporal Convolutional Networks
Automatic surgical phase recognition is a challenging and crucial task with the potential to improve patient safety and become an integral part of intra-operative decision-support systems.
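The core building block behind multi-stage temporal convolutional networks of this kind is a causal dilated 1-D convolution: the output at frame t depends only on frames up to t, which makes online recognition possible, and stacking layers with growing dilation enlarges the temporal receptive field. A minimal sketch, not the paper's implementation (TeCNO builds on MS-TCN-style dilated residual layers over CNN frame features):

```python
def causal_dilated_conv(x, weights, dilation):
    """1-D causal dilated convolution over a frame-wise feature sequence.

    x: per-frame feature values, weights: kernel taps, dilation: temporal
    gap between taps. Output at time t uses only frames <= t (causal).
    """
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            src = t - i * dilation  # look back i * dilation frames
            if src >= 0:            # ignore taps that fall before frame 0
                acc += w * x[src]
        out.append(acc)
    return out

# Multi-stage idea: each stage refines the previous stage's output,
# with dilation doubling to widen the temporal receptive field.
frames = [1.0] * 8                                        # toy feature sequence
stage1 = causal_dilated_conv(frames, [0.5, 0.5], dilation=1)
stage2 = causal_dilated_conv(stage1, [0.5, 0.5], dilation=2)
```

In the full model each stage is a stack of such layers with residual connections, and every stage is trained with its own phase-classification loss.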
Not End-to-End: Explore Multi-Stage Architecture for Online Surgical Phase Recognition
To address this problem, we propose a new non-end-to-end training strategy and explore different designs of multi-stage architectures for the surgical phase recognition task.
Learning from a tiny dataset of manual annotations: a teacher/student approach for surgical phase recognition
Vision algorithms capable of interpreting scenes from a real-time video stream are necessary for computer-assisted surgery systems to achieve context-aware behavior.
Multi-Task Recurrent Convolutional Network with Correlation Loss for Surgical Video Analysis
By jointly leveraging low-level feature sharing and high-level prediction correlation, our MTRCNet-CL method encourages strong interaction between the two tasks, allowing each to benefit the other.
Trans-SVNet: Accurate Phase Recognition from Surgical Videos via Hybrid Embedding Aggregation Transformer
In this paper, we introduce the Transformer to surgical workflow analysis for the first time, reconsidering the previously ignored complementary effects of spatial and temporal features for accurate surgical phase recognition.
LensID: A CNN-RNN-Based Framework Towards Lens Irregularity Detection in Cataract Surgery Videos
In particular, we propose (I) an end-to-end recurrent neural network to recognize the lens-implantation phase and (II) a novel semantic segmentation network to segment the lens and pupil after the implantation phase.
Exploring Segment-level Semantics for Online Phase Recognition from Surgical Videos
Automatic surgical phase recognition plays a vital role in robot-assisted surgeries.
Less is More: Surgical Phase Recognition from Timestamp Supervision
Our study uncovers unique insights into surgical phase recognition with timestamp supervision: 1) timestamp annotation reduces annotation time by 74% compared with full annotation, and surgeons tend to place timestamps near the middle of phases; 2) extensive experiments demonstrate that our method achieves results competitive with fully supervised methods while reducing manual annotation cost; 3) less is more in surgical phase recognition, i.e., fewer but discriminative pseudo labels outperform full labels that contain ambiguous frames; 4) the proposed UATD can be used as a plug-and-play method to clean ambiguous labels near phase boundaries and to improve the performance of current surgical phase recognition methods.
Free Lunch for Surgical Video Understanding by Distilling Self-Supervisions
Our key insight is to distill knowledge from publicly available models trained on large generic datasets to facilitate the self-supervised learning of surgical videos.
Dissecting Self-Supervised Learning Methods for Surgical Computer Vision
Correctly transferring these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL (up to 7.4% on phase recognition and 20% on tool presence detection), and outperforms state-of-the-art semi-supervised phase recognition approaches by up to 14%.