16 papers with code • 11 benchmarks • 8 datasets

Most implemented papers

Deep Residual Learning for Image Recognition

tensorflow/models CVPR 2016

Deep residual nets are the foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won 1st place on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
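
The core architectural idea behind these results is the residual (skip) connection. Below is a minimal PyTorch sketch of a basic residual block; the layer sizes and names are illustrative, not the reference implementation.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = relu(F(x) + x): the block learns a residual F(x) on top of the identity."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(residual + x)  # identity shortcut

if __name__ == "__main__":
    block = BasicResidualBlock(64)
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```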

PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space

yanx27/Pointnet_Pointnet2_pytorch NeurIPS 2017

By exploiting metric space distances, our network is able to learn local features with increasing contextual scales.
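
A minimal sketch of the idea behind a PointNet++ set abstraction layer: sample centroids, group each centroid's metric-space neighborhood, and summarize the group with a shared MLP plus max pooling. Random centroid sampling and the radius/group-size values are simplifications; the paper uses farthest point sampling and multi-scale grouping.

```python
import torch
import torch.nn as nn

def ball_query(xyz, centroids, radius, k):
    """For each centroid, pick k neighbor indices, preferring points within `radius`."""
    dists = torch.cdist(centroids, xyz)                      # (B, M, N)
    dists = dists.masked_fill(dists > radius, float("inf"))
    # indices of the k closest points; may include out-of-ball points if the ball is underfull
    return dists.topk(k, largest=False).indices              # (B, M, k)

class SetAbstraction(nn.Module):
    """Simplified set abstraction: group local metric-space neighborhoods and
    summarize each with a shared MLP followed by max pooling."""
    def __init__(self, in_dim, out_dim, radius=0.5, k=16, n_centroids=128):
        super().__init__()
        self.radius, self.k, self.n_centroids = radius, k, n_centroids
        self.mlp = nn.Sequential(nn.Linear(in_dim + 3, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim), nn.ReLU())

    def forward(self, xyz, feats):
        B, N, _ = xyz.shape
        M, k = self.n_centroids, self.k
        # the paper uses farthest point sampling; random sampling keeps the sketch short
        sel = torch.stack([torch.randperm(N, device=xyz.device)[:M] for _ in range(B)])
        centroids = torch.gather(xyz, 1, sel.unsqueeze(-1).expand(-1, -1, 3))  # (B, M, 3)
        idx = ball_query(xyz, centroids, self.radius, k).unsqueeze(-1)         # (B, M, k, 1)
        local_xyz = torch.gather(xyz.unsqueeze(1).expand(-1, M, -1, -1), 2,
                                 idx.expand(-1, -1, -1, 3)) - centroids.unsqueeze(2)
        local_feats = torch.gather(feats.unsqueeze(1).expand(-1, M, -1, -1), 2,
                                   idx.expand(-1, -1, -1, feats.shape[-1]))
        grouped = torch.cat([local_xyz, local_feats], dim=-1)                  # (B, M, k, 3+C)
        return centroids, self.mlp(grouped).max(dim=2).values                  # (B, M, out_dim)

if __name__ == "__main__":
    xyz, feats = torch.randn(2, 1024, 3), torch.randn(2, 1024, 8)
    centers, pooled = SetAbstraction(in_dim=8, out_dim=64)(xyz, feats)
    print(centers.shape, pooled.shape)  # (2, 128, 3) (2, 128, 64)
```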

Llama 2: Open Foundation and Fine-Tuned Chat Models

facebookresearch/llama 18 Jul 2023

In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
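
A minimal usage sketch via the Hugging Face transformers API; the gated model id meta-llama/Llama-2-7b-chat-hf and the generation settings are assumptions for illustration, not part of this listing.

```python
# Minimal generation sketch using the `transformers` API. The model id below is a
# gated checkpoint and is assumed for illustration; swap in whichever Llama 2
# size/variant you have access to.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "Explain residual connections in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```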

Anomaly Detection via Reverse Distillation from One-Class Embedding

hq-deng/RD4AD CVPR 2022

Knowledge distillation (KD) achieves promising results on the challenging problem of unsupervised anomaly detection (AD). The representation discrepancy of anomalies in the teacher-student (T-S) model provides essential evidence for AD.
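
A minimal sketch of how that discrepancy becomes an anomaly score: compare teacher and student feature maps with cosine distance and upsample to a per-pixel map. The single feature scale shown here simplifies the paper's multi-scale aggregation.

```python
import torch
import torch.nn.functional as F

def anomaly_map(teacher_feats: torch.Tensor, student_feats: torch.Tensor,
                out_size=(256, 256)) -> torch.Tensor:
    """Per-pixel anomaly map from the teacher-student representation discrepancy.

    Both inputs are (B, C, H, W) feature maps of the same image; regions the
    student fails to reproduce (high cosine distance to the teacher) score as
    anomalous.
    """
    discrepancy = 1.0 - F.cosine_similarity(teacher_feats, student_feats, dim=1)  # (B, H, W)
    amap = F.interpolate(discrepancy.unsqueeze(1), size=out_size,
                         mode="bilinear", align_corners=False)                    # (B, 1, H', W')
    return amap.squeeze(1)

if __name__ == "__main__":
    teacher = torch.randn(2, 512, 16, 16)
    student = torch.randn(2, 512, 16, 16)
    amap = anomaly_map(teacher, student)
    image_score = amap.flatten(1).max(dim=1).values  # one anomaly score per image
    print(amap.shape, image_score.shape)             # (2, 256, 256) (2,)
```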

Multi-Modal Fusion Transformer for End-to-End Autonomous Driving

autonomousvision/transfuser CVPR 2021

How should representations from complementary sensors be integrated for autonomous driving?
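
TransFuser's answer is attention-based fusion of image and LiDAR features. Below is a minimal sketch of one fusion stage that flattens both feature maps into tokens and runs a transformer encoder over the joint sequence; the dimensions and the single fusion stage are illustrative, as the actual model fuses at several backbone resolutions.

```python
import torch
import torch.nn as nn

class SensorFusionBlock(nn.Module):
    """Fuse camera and LiDAR feature maps by flattening both into tokens and
    letting a transformer encoder attend across the two modalities."""
    def __init__(self, dim: int = 256, heads: int = 4, layers: int = 2):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, img_feats: torch.Tensor, lidar_feats: torch.Tensor):
        B, C, Hi, Wi = img_feats.shape
        _, _, Hl, Wl = lidar_feats.shape
        img_tokens = img_feats.flatten(2).transpose(1, 2)      # (B, Hi*Wi, C)
        lidar_tokens = lidar_feats.flatten(2).transpose(1, 2)  # (B, Hl*Wl, C)
        fused = self.encoder(torch.cat([img_tokens, lidar_tokens], dim=1))
        # split back into per-sensor feature maps after cross-modal attention
        img_out = fused[:, : Hi * Wi].transpose(1, 2).reshape(B, C, Hi, Wi)
        lidar_out = fused[:, Hi * Wi :].transpose(1, 2).reshape(B, C, Hl, Wl)
        return img_out, lidar_out

if __name__ == "__main__":
    block = SensorFusionBlock(dim=256)
    img, lidar = torch.randn(2, 256, 8, 8), torch.randn(2, 256, 8, 8)
    img_out, lidar_out = block(img, lidar)
    print(img_out.shape, lidar_out.shape)  # (2, 256, 8, 8) (2, 256, 8, 8)
```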

Visual Spatial Reasoning

cambridgeltl/visual-spatial-reasoning 30 Apr 2022

Spatial relations are a basic part of human cognition.

MTet: Multi-domain Translation for English and Vietnamese

vietai/mTet 11 Oct 2022

We introduce MTet, the largest publicly available parallel corpus for English-Vietnamese translation.

A Bi-model based RNN Semantic Frame Parsing Model for Intent Detection and Slot Filling

ray075hl/Bi-Model-Intent-And-Slot NAACL 2018

The most effective algorithms are based on sequence-to-sequence ("encoder-decoder") model structures and generate the intents and semantic tags either with separate models or with a joint model.
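
A minimal sketch of the joint-model variant: one BiLSTM encoder feeding a per-token slot head and an utterance-level intent head. This single-encoder version only illustrates joint prediction; the paper's bi-model uses two cooperating networks that exchange hidden states.

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Single BiLSTM encoder with two heads: per-token slot tags and an
    utterance-level intent label (a simplified joint model, not the paper's
    two-network bi-model)."""
    def __init__(self, vocab_size, n_intents, n_slots, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.slot_head = nn.Linear(2 * hidden, n_slots)
        self.intent_head = nn.Linear(2 * hidden, n_intents)

    def forward(self, token_ids: torch.Tensor):
        states, _ = self.encoder(self.embed(token_ids))        # (B, T, 2*hidden)
        slot_logits = self.slot_head(states)                   # (B, T, n_slots)
        intent_logits = self.intent_head(states.mean(dim=1))   # (B, n_intents)
        return intent_logits, slot_logits

if __name__ == "__main__":
    model = JointIntentSlotModel(vocab_size=1000, n_intents=7, n_slots=20)
    intents, slots = model(torch.randint(1, 1000, (4, 12)))
    print(intents.shape, slots.shape)  # (4, 7) (4, 12, 20)
```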

Compositional Learning of Image-Text Query for Image Retrieval

ecom-research/ComposeAE 19 Jun 2020

In this paper, we investigate the problem of retrieving images from a database based on a multi-modal (image-text) query.
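
A minimal sketch of the retrieval setup: compose the query image embedding with the text-modifier embedding and rank database images by cosine similarity. The gated fusion below is an illustrative stand-in for ComposeAE's complex-space composition, and the image/text encoders are assumed to exist upstream.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryComposer(nn.Module):
    """Compose an image embedding and a text-modifier embedding into a single
    query vector (simple gated fusion as a stand-in for ComposeAE's composition)."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([img_emb, txt_emb], dim=-1)
        composed = self.gate(pair) * img_emb + self.update(pair)
        return F.normalize(composed, dim=-1)

def retrieve(query: torch.Tensor, database: torch.Tensor, top_k: int = 5):
    """Rank database image embeddings by cosine similarity to the composed query."""
    scores = F.normalize(database, dim=-1) @ query.T   # (N, B)
    return scores.topk(top_k, dim=0).indices           # indices of the best matches

if __name__ == "__main__":
    composer = QueryComposer(dim=256)
    img_emb, txt_emb = torch.randn(1, 256), torch.randn(1, 256)
    db = torch.randn(1000, 256)            # precomputed database image embeddings
    query = composer(img_emb, txt_emb)
    print(retrieve(query, db).squeeze(1))  # indices of the 5 closest images
```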

Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing

bionlu-coling2024/biomed-ner-intent_detection 31 Jul 2020

In this paper, we challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.
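
Pretraining from scratch includes learning a domain-specific vocabulary. Below is a minimal sketch using the Hugging Face tokenizers and transformers APIs: train a WordPiece vocabulary on in-domain text, then initialize a masked language model from a fresh config rather than a general-domain checkpoint. The corpus filename and model size are placeholders.

```python
# Sketch of from-scratch domain-specific pretraining: build a biomedical
# WordPiece vocabulary, then initialize a masked language model from a fresh
# config instead of a general-domain checkpoint. "pubmed_abstracts.txt" and the
# model size are placeholders for illustration.
from tokenizers import BertWordPieceTokenizer
from transformers import BertConfig, BertForMaskedLM

# 1) Train a domain-specific vocabulary on raw in-domain text.
tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(files=["pubmed_abstracts.txt"], vocab_size=30522)
tokenizer.save_model(".")  # writes vocab.txt

# 2) Initialize the model from a config (random weights), not from a
#    general-domain checkpoint, so pretraining truly starts from scratch.
config = BertConfig(vocab_size=30522, hidden_size=768,
                    num_hidden_layers=12, num_attention_heads=12)
model = BertForMaskedLM(config)
print(sum(p.numel() for p in model.parameters()) // 1_000_000, "M parameters")
```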