Classification
3240 papers with code • 35 benchmarks • 125 datasets
Classification is the task of categorizing data into predefined classes or groups. The aim is to train a model that correctly predicts the class of new, unseen data. The model is trained on a labeled dataset in which each instance is assigned a class label; the learning algorithm builds a mapping from the features of the data to the class labels, and this mapping is then used to predict the labels of new data points. Prediction quality is usually evaluated with metrics such as accuracy, precision, and recall.
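This workflow is easiest to see in code. Below is a minimal sketch using scikit-learn (an assumed choice of library, not one the page prescribes) on the toy Iris dataset: fit a classifier on labeled data, predict on held-out data, and score with accuracy, precision, and recall.

```python
# Minimal sketch of the classification workflow described above.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Labeled dataset: each instance has features X and a class label y.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train a model that maps features to class labels.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predict labels for unseen data and evaluate with the usual metrics.
y_pred = clf.predict(X_test)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro"))
print("recall   :", recall_score(y_test, y_pred, average="macro"))
```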
Subtasks
- Text Classification
- Graph Classification
- Audio Classification
- Medical Image Classification
- Plant Phenotyping
- Morphology classification
- Classifier calibration
- Multi-modal Classification
- Learning with coarse labels
- Episode Classification
- Phishing Website Detection
- Underwater Acoustic Classification
- quantum circuit classification (classical ML)
- quantum circuit classification (quantum ML)
- noisy quantum circuit classification (quantum ML, error mitigation)
- Sensitivity Classification
Most implemented papers
Deep Residual Learning for Image Recognition
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
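The paper's core idea is a block that learns a residual F(x) and adds it back to its input via a skip connection. The PyTorch module below is an illustrative simplification of such a block, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Illustrative residual block: output = relu(F(x) + x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # skip connection: add the input back

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```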
YOLOv3: An Incremental Improvement
At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster.
Very Deep Convolutional Networks for Large-Scale Image Recognition
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting.
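Depth here comes from stacking small 3x3 convolutions: two stacked 3x3 layers cover the same 5x5 receptive field as one larger filter, with fewer parameters. The block below is a simplified sketch in that style, not the paper's full configuration.

```python
import torch
import torch.nn as nn

# Simplified VGG-style block: stacked 3x3 convolutions, then pooling.
vgg_block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),  # halve spatial resolution
)

x = torch.randn(1, 3, 224, 224)
print(vgg_block(x).shape)  # torch.Size([1, 64, 112, 112])
```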
Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output.
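Concretely, each layer in a dense block receives the concatenated feature maps of all preceding layers. The sketch below is a stripped-down illustration of that connectivity pattern, omitting the paper's bottleneck and transition layers.

```python
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    """Illustrative dense block: each layer sees all earlier feature maps."""
    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Concatenate every earlier output before each new layer.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

x = torch.randn(1, 16, 32, 32)
print(TinyDenseBlock(16, growth_rate=12, num_layers=3)(x).shape)
# torch.Size([1, 52, 32, 32]) -> 16 + 3 * 12 channels
```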
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited.
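The title's "16x16 words" refers to splitting an image into fixed-size patches that are linearly embedded and fed to a standard Transformer as tokens. The snippet below sketches just that patchify-and-embed step, omitting the class token and position embeddings.

```python
import torch
import torch.nn as nn

# Patch embedding: a strided convolution splits the image into 16x16 patches
# and linearly projects each patch to an embedding vector (a "word").
patch_size, embed_dim = 16, 768
patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

img = torch.randn(1, 3, 224, 224)
tokens = patch_embed(img)                   # (1, 768, 14, 14)
tokens = tokens.flatten(2).transpose(1, 2)  # (1, 196, 768): 196 patch tokens
print(tokens.shape)
```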
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network.
Searching for MobileNetV3
We achieve new state of the art results for mobile classification, detection and segmentation.
Convolutional Pose Machines
Pose Machines provide a sequential prediction framework for learning rich implicit spatial models.
A ConvNet for the 2020s
The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.
Xception: Deep Learning with Depthwise Separable Convolutions
We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution).
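The operation the abstract describes decomposes cleanly into two layers: a grouped 3x3 depthwise convolution that filters each channel independently, followed by a 1x1 pointwise convolution that mixes channels. The snippet below is a minimal sketch of that operation, not Xception's full architecture.

```python
import torch
import torch.nn as nn

in_ch, out_ch = 32, 64

# Depthwise: one 3x3 filter per input channel (groups=in_ch).
depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
# Pointwise: 1x1 convolution mixes information across channels.
pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

x = torch.randn(1, in_ch, 56, 56)
y = pointwise(depthwise(x))
print(y.shape)  # torch.Size([1, 64, 56, 56])
```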