Few-Shot 3D Point Cloud Classification
25 papers with code • 8 benchmarks • 1 dataset
Libraries
Use these libraries to find Few-Shot 3D Point Cloud Classification models and implementations.

Most implemented papers
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.
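The core building block this paper introduces, scaled dot-product attention, is what later Transformer-based point cloud models reuse. Below is a minimal sketch of that operation (my own illustration in PyTorch, not the paper's reference code); the tensor names and toy shapes are assumptions.

```python
# Minimal sketch of scaled dot-product attention (illustration, not the
# paper's reference implementation).
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_model)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)            # attention weights
    return weights @ v                             # weighted sum of values

q = k = v = torch.randn(2, 16, 64)  # toy token embeddings
out = scaled_dot_product_attention(q, k, v)
print(out.shape)                    # torch.Size([2, 16, 64])
```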
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
Point cloud is an important type of geometric data structure.
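PointNet's key idea is to process each point with a shared MLP and then apply a symmetric max-pooling, so the global feature does not depend on the ordering of the points. The sketch below is a simplified, hedged rendition of that idea (not the authors' code); layer widths and the 40-class head are assumptions.

```python
# Hedged sketch of the PointNet idea: shared per-point MLP + symmetric
# max-pooling for permutation invariance (simplified, not the official code).
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=40):  # 40 classes is an assumption
        super().__init__()
        # Conv1d with kernel size 1 == the same MLP applied to every point.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Linear(1024, num_classes)

    def forward(self, xyz):                    # xyz: (batch, 3, num_points)
        feats = self.point_mlp(xyz)            # (batch, 1024, num_points)
        global_feat = feats.max(dim=2).values  # order-invariant pooling
        return self.head(global_feat)

points = torch.randn(8, 3, 1024)  # 8 clouds of 1024 points
logits = TinyPointNet()(points)
print(logits.shape)               # torch.Size([8, 40])
```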
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
By exploiting metric space distances, our network is able to learn local features with increasing contextual scales.
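The "increasing contextual scales" come from stacking set-abstraction levels: sample centroids, group their metric-space neighbourhoods, and pool each group. The following is a rough sketch of one such step under simplifying assumptions (a plain farthest point sampling loop and a naive ball query, not the authors' CUDA operators).

```python
# Simplified set-abstraction step in the spirit of PointNet++ (assumptions:
# naive farthest point sampling and ball query, single unbatched cloud).
import torch

def farthest_point_sample(xyz, m):
    # xyz: (n, 3) -> indices of m well-spread centroids
    n = xyz.size(0)
    idx = torch.zeros(m, dtype=torch.long)
    dist = torch.full((n,), float("inf"))
    idx[0] = torch.randint(n, (1,)).item()
    for i in range(1, m):
        dist = torch.minimum(dist, (xyz - xyz[idx[i - 1]]).pow(2).sum(-1))
        idx[i] = dist.argmax()
    return idx

def ball_group(xyz, centroids, radius=0.2, k=16):
    # For each centroid, gather the k closest points, preferring those
    # inside `radius` (out-of-ball picks only appear if the ball is sparse).
    d = torch.cdist(xyz[centroids], xyz)     # (m, n) pairwise distances
    d[d > radius] = float("inf")
    return d.topk(k, largest=False).indices  # (m, k) neighbour indices

cloud = torch.rand(1024, 3)
centers = farthest_point_sample(cloud, 128)
groups = ball_group(cloud, centers)
print(groups.shape)  # torch.Size([128, 16])
```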
Dynamic Graph CNN for Learning on Point Clouds
Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices.
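DGCNN's EdgeConv operator rebuilds a k-nearest-neighbour graph from the current features and aggregates edge features over each neighbourhood. Here is a hedged sketch of that operator (my own simplification, not the official implementation); the MLP width and k are assumptions.

```python
# Sketch of an EdgeConv-style layer: dynamic kNN graph + edge features
# (centre, neighbour - centre) + shared MLP + max aggregation.
import torch
import torch.nn as nn

def knn_graph(x, k):
    # x: (batch, num_points, dims) -> neighbour indices (batch, num_points, k)
    dist = torch.cdist(x, x)
    return dist.topk(k + 1, largest=False).indices[..., 1:]  # drop self

def edge_conv(x, mlp, k=8):
    b, n, d = x.shape
    idx = knn_graph(x, k)                         # (b, n, k)
    batch = torch.arange(b).view(b, 1, 1)
    neighbours = x[batch, idx]                    # (b, n, k, d)
    center = x.unsqueeze(2).expand(b, n, k, d)
    edge = torch.cat([center, neighbours - center], dim=-1)  # (b, n, k, 2d)
    return mlp(edge).max(dim=2).values            # aggregate over neighbours

x = torch.randn(2, 256, 3)
mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU())
print(edge_conv(x, mlp).shape)  # torch.Size([2, 256, 64])
```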
PointCNN: Convolution On $\mathcal{X}$-Transformed Points
The proposed method is a generalization of typical CNNs to feature learning from point clouds, thus we call it PointCNN.
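At the heart of PointCNN is the X-transform: an MLP predicts a K x K matrix from the coordinates of K neighbours, which weights and permutes the neighbour features before a standard convolution. The snippet below is a loose, single-neighbourhood illustration under that reading (not the PointCNN operator itself); the MLP shape and the linear layer standing in for the convolution are assumptions.

```python
# Simplified view of an X-transform for one local neighbourhood.
import torch
import torch.nn as nn

K, C_in, C_out = 8, 32, 64
neighbor_xyz = torch.randn(1, K, 3)      # local coordinates of K neighbours
neighbor_feat = torch.randn(1, K, C_in)  # their input features

x_mlp = nn.Sequential(nn.Linear(3, K), nn.ReLU(), nn.Linear(K, K))
X = x_mlp(neighbor_xyz)                  # (1, K, K) learned transform
transformed = X @ neighbor_feat          # (1, K, C_in) "X-transformed" features

conv = nn.Linear(K * C_in, C_out)        # stands in for the conv kernel
out = conv(transformed.flatten(1))       # (1, C_out) feature for this point
print(out.shape)                         # torch.Size([1, 64])
```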
Masked Autoencoders for Point Cloud Self-supervised Learning
A standard Transformer-based autoencoder, with an asymmetric design and a shifting mask tokens operation, learns high-level latent features from the unmasked point patches in order to reconstruct the masked ones.
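The asymmetry means only the visible patch tokens pass through the encoder, while learned mask tokens are shifted to a lightweight decoder that reconstructs the masked patches. The sketch below illustrates just that masking and token routing (my own simplification, not the authors' code); the patch count, dimensions, mask ratio, and tiny Transformer sizes are assumptions.

```python
# Sketch of asymmetric masked autoencoding on point-patch tokens:
# encode visible tokens only, append mask tokens for the decoder.
import torch
import torch.nn as nn

num_patches, dim, mask_ratio = 64, 384, 0.6
tokens = torch.randn(1, num_patches, dim)  # embedded point patches

# Random masking: keep a small visible subset.
perm = torch.randperm(num_patches)
num_visible = int(num_patches * (1 - mask_ratio))
visible_idx, masked_idx = perm[:num_visible], perm[num_visible:]

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True), num_layers=2)
latent = encoder(tokens[:, visible_idx])   # encode visible patches only

mask_token = nn.Parameter(torch.zeros(1, 1, dim))
decoder_in = torch.cat(
    [latent, mask_token.expand(1, len(masked_idx), dim)], dim=1)
decoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True), num_layers=1)
pred = decoder(decoder_in)[:, num_visible:]  # predictions for masked patches
print(latent.shape, pred.shape)
```

In the full method these predictions would be decoded to point coordinates and trained against the masked patches with a Chamfer-style reconstruction loss.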
Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training
By fine-tuning on downstream tasks, Point-M2AE achieves 86.43% accuracy on ScanObjectNN, +3.36% over the second-best method, and its hierarchical pre-training scheme also brings large gains to few-shot classification, part segmentation, and 3D object detection.
Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?
The success of deep learning heavily relies on large-scale data with comprehensive labels, which are more expensive and time-consuming to acquire in 3D than for 2D images or natural language.
Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining
This motivates us to learn 3D representations by sharing the merits of both paradigms, which is non-trivial due to the pattern difference between them.
Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models
To overcome this limitation, we propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models.
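The "instance-aware" part means the prompt tokens are not a single fixed set of learned vectors but are generated from each input's own features before being fed, together with the patch tokens, into the frozen pre-trained backbone. Below is a loose illustration of that idea (my own sketch, not the IDPT implementation); the generator, prompt count, and dimensions are assumptions.

```python
# Sketch of instance-conditioned prompt generation for a frozen backbone.
import torch
import torch.nn as nn

dim, num_prompts = 384, 4
tokens = torch.randn(2, 64, dim)                # patch tokens for 2 point clouds

prompt_net = nn.Linear(dim, num_prompts * dim)  # instance-conditioned generator
instance_feat = tokens.mean(dim=1)              # (2, dim) per-instance summary
prompts = prompt_net(instance_feat).view(2, num_prompts, dim)

backbone_input = torch.cat([prompts, tokens], dim=1)  # (2, 68, dim)
print(backbone_input.shape)
```

Only the prompt generator (and typically a small task head) would be trained, while the pre-trained point cloud backbone stays frozen.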