VISSL is a computer VIsion library for state-of-the-art Self-Supervised Learning research with PyTorch. VISSL aims to accelerate the research cycle in self-supervised learning: from designing a new self-supervised task to evaluating the learned representations. Key features include:
- **Reproducible implementation of SOTA in Self-Supervision**: All existing SOTA methods in self-supervision are implemented, including SwAV, SimCLR, MoCo(v2), PIRL, NPID, NPID++, DeepClusterV2, ClusterFit, RotNet, and Jigsaw. Supervised training is also supported.
- **Benchmark suite**: A variety of benchmark tasks, including linear image classification (Places205, ImageNet-1k, VOC07), full fine-tuning, semi-supervised benchmarks, nearest-neighbor benchmarks, and object detection (Pascal VOC and COCO).
- **Model Zoo**: Over 60 pre-trained self-supervised model weights; a minimal loading sketch is shown below.
Get started with VISSL by trying one of the Colab tutorial notebooks.
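For example, a Model Zoo checkpoint can be used as a frozen feature extractor. The sketch below is illustrative rather than VISSL's own API: it assumes the downloaded ResNet-50 weights have already been converted to a torchvision-compatible state dict, and the file name `resnet50_swav_torchvision.pth` is hypothetical.

```python
# Minimal sketch (not VISSL's own API): use a pre-trained ResNet-50 checkpoint
# as a frozen feature extractor with torchvision.
# Assumes the weights were already converted to a torchvision-compatible
# state dict; the file name below is hypothetical.
import torch
import torchvision.models as models

weights_path = "resnet50_swav_torchvision.pth"  # hypothetical local file
state_dict = torch.load(weights_path, map_location="cpu")

model = models.resnet50()
model.fc = torch.nn.Identity()  # drop the supervised classification head
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing, "unexpected:", unexpected)  # sanity-check the key match

model.eval()
with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))  # 2048-d representation
print(features.shape)  # torch.Size([1, 2048])
```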
| MODEL | TOP-1 ACCURACY | ~FLOPS | PAPER | YEAR |
|---|---|---|---|---|
| DeepClusterV2 | 75.18% | 4 Billion | | |
| SwAV | 77.03% | | | |
| MoCo-v2 | 66.4% | 4 Billion | | |
| SimCLR | 73.84% | | | |
| ClusterFit | 53.63% | 4 Billion | | |
| PIRL | 70.9% | | | |
| NPID++ | 62.73% | | | |
| Jigsaw | 53.09% | 4 Billion | | |
| Colorization | 49.24% | 4 Billion | | |
| ResNet Semi-supervised | 79.2% | 4 Billion | | |
| ResNet Semi-weakly supervised | 81.06% | 4 Billion | | |
| DeepCluster | 37.88% | 715 Million | | |
| NPID | 54.99% | 4 Billion | | |
| RotNet | 54.89% | 4 Billion | | |
| ResNet Supervised | 77.21% | 16 Billion | | |
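As a concrete illustration of the linear image classification benchmark mentioned above, the sketch below trains only a linear head on top of frozen features. This is a conceptual sketch, not VISSL's config-driven benchmark implementation; `trunk` and `train_loader` are assumed to be a frozen 2048-d feature extractor (such as the ResNet-50 from the earlier sketch) and an ImageNet-style DataLoader.

```python
# Conceptual sketch of the linear-evaluation protocol (not VISSL's benchmark code):
# freeze the pre-trained trunk and train only a linear classifier on its features.
import torch
import torch.nn as nn

def linear_eval_epoch(trunk, train_loader, num_classes=1000, device="cpu"):
    trunk.eval().to(device)
    for p in trunk.parameters():
        p.requires_grad = False  # representations stay fixed

    head = nn.Linear(2048, num_classes).to(device)  # 2048-d ResNet-50 features
    optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        with torch.no_grad():
            feats = trunk(images)  # frozen features, no gradients through the trunk
        loss = criterion(head(feats), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return head
```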