Zero-Shot Transfer 3D Point Cloud Classification
10 papers with code • 3 benchmarks • 2 datasets
Most implemented papers
Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining
This motivates us to learn 3D representations that share the merits of both paradigms, which is non-trivial due to the pattern difference between them.
ShapeLLM: Universal 3D Object Understanding for Embodied Interaction
This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring a universal 3D object understanding with 3D point clouds and languages.
PointCLIP: Point Cloud Understanding by CLIP
On top of that, we design an inter-view adapter to better extract the global feature and adaptively fuse the few-shot knowledge learned from 3D into CLIP pre-trained in 2D.
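A minimal sketch of the CLIP-based zero-shot pipeline that PointCLIP popularized: render the point cloud into multi-view depth maps, encode each view with a frozen CLIP image encoder, pool the views, and match the result against text embeddings of class prompts. The projection helper, encoder call, and mean pooling below are illustrative stand-ins, not PointCLIP's actual implementation (which learns per-view weights and, in few-shot mode, the inter-view adapter mentioned above).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(points, project_to_depth, clip_image_encoder,
                       text_features, num_views=6):
    """Zero-shot classification of one point cloud with a frozen CLIP image encoder.

    points:        (N, 3) point cloud
    project_to_depth(points, view): hypothetical renderer -> (1, 3, H, W) depth image
    clip_image_encoder(image):      stand-in CLIP image encoder -> (1, D)
    text_features: (num_classes, D) precomputed CLIP text embeddings of class prompts
    """
    view_features = []
    for v in range(num_views):
        depth_map = project_to_depth(points, view=v)
        feat = clip_image_encoder(depth_map)
        view_features.append(F.normalize(feat, dim=-1))
    # Simple mean pooling over views; PointCLIP instead weights/fuses views adaptively.
    global_feat = F.normalize(torch.stack(view_features).mean(0), dim=-1)
    logits = 100.0 * global_feat @ F.normalize(text_features, dim=-1).T
    return logits.softmax(dim=-1)  # (1, num_classes) class probabilities
```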
PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
In this paper, we first combine CLIP and GPT into a unified 3D open-world learner, named PointCLIP V2, which fully unleashes their potential for zero-shot 3D classification, segmentation, and detection.
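PointCLIP V2's key ingredient is better prompting: the zero-shot "classifier" is simply a matrix of CLIP text embeddings built from 3D-aware prompts. The sketch below illustrates that idea with hand-written templates and a generic text encoder/tokenizer interface; these are assumptions, whereas the paper uses GPT to generate its 3D-specific prompts.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_text_classifier(class_names, clip_text_encoder, tokenizer):
    """Build zero-shot classifier weights from prompt templates.

    clip_text_encoder / tokenizer are stand-ins for a CLIP text stack;
    the templates are hand-written examples, not the paper's GPT prompts.
    Returns (num_classes, D) text embeddings usable as classifier weights.
    """
    templates = [
        "a depth map of a {}.",
        "a silhouette of a {} projected from a point cloud.",
    ]
    weights = []
    for name in class_names:
        tokens = tokenizer([t.format(name) for t in templates])
        feats = F.normalize(clip_text_encoder(tokens), dim=-1)  # (T, D), one row per template
        weights.append(F.normalize(feats.mean(0), dim=-1))      # average the templates
    return torch.stack(weights)
```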
Uni3D: Exploring Unified 3D Representation at Scale
Scaling up representations for images or text has been extensively investigated in the past few years and has led to revolutions in learning vision and language.
CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training
To address this issue, we propose CLIP2Point, an image-depth pre-training method by contrastive learning to transfer CLIP to the 3D domain, and adapt it to point cloud classification.
ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding
Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets.
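A hedged sketch of the kind of triplet alignment ULIP describes: a trainable point cloud encoder is pulled toward frozen CLIP image and text embeddings of the same object with a symmetric contrastive (InfoNCE-style) loss. Function names, temperature, and the equal loss weighting are illustrative assumptions, not ULIP's released code.

```python
import torch
import torch.nn.functional as F

def contrastive_align(pc_feats, img_feats, txt_feats, temperature=0.07):
    """Align 3D features with frozen image/text embeddings from (cloud, image, text) triplets.

    pc_feats:  (B, D) outputs of the trainable point cloud encoder
    img_feats: (B, D) frozen CLIP image embeddings of rendered views
    txt_feats: (B, D) frozen CLIP text embeddings of the captions
    """
    pc = F.normalize(pc_feats, dim=-1)
    im = F.normalize(img_feats, dim=-1)
    tx = F.normalize(txt_feats, dim=-1)
    labels = torch.arange(pc.size(0), device=pc.device)  # matching pairs lie on the diagonal

    def nce(a, b):
        logits = a @ b.T / temperature
        # Symmetric cross-entropy over both matching directions (a->b and b->a).
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.T, labels))

    return nce(pc, im) + nce(pc, tx)
```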
OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding
Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIP-based models for various applications, such as point cloud captioning and point cloud-conditioned image generation.
ViT-Lens: Initiating Omni-Modal Exploration through 3D Insights
A well-trained lens with a ViT backbone has the potential to serve as one of these foundation models, supervising the learning of subsequent modalities.
Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training
Contrastive learning has emerged as a promising paradigm for 3D open-world understanding, i.e., aligning point cloud representations to the image and text embedding spaces individually.