MULTI-VIEW LEARNING
51 papers with code • 0 benchmarks • 1 dataset
Multi-View Learning is a machine learning framework in which data are represented by multiple distinct feature groups, each of which is referred to as a particular view.
Source: Dissimilarity-based representation for radiomics applications
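As a minimal illustration of this setting (with made-up feature dimensions), each sample below is described by two distinct, row-aligned feature groups:

```python
import numpy as np

# Hypothetical toy example: 100 samples described by two distinct views.
rng = np.random.default_rng(0)
n_samples = 100

# View 1: e.g. 64-dimensional image features.
X_view1 = rng.normal(size=(n_samples, 64))
# View 2: e.g. 300-dimensional text features for the same samples.
X_view2 = rng.normal(size=(n_samples, 300))

# A multi-view dataset is the collection of per-view feature matrices,
# row-aligned so that row i in every view refers to the same sample.
multi_view_data = {"image": X_view1, "text": X_view2}
```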
Benchmarks
These leaderboards are used to track progress in Multi-View Learning.
Libraries
Use these libraries to find Multi-View Learning models and implementations.

Most implemented papers
Neural News Recommendation with Attentive Multi-View Learning
In the user encoder, we learn representations of users from their browsed news and apply an attention mechanism to select informative news for user representation learning.
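The entry above describes attention pooling over browsed-news representations; a minimal PyTorch sketch of that idea (not the authors' code; dimensions and names are illustrative) might look like this:

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Weights each browsed-news vector by a learned attention score
    and returns their weighted sum as the user representation."""
    def __init__(self, news_dim: int, attn_dim: int = 64):
        super().__init__()
        self.proj = nn.Linear(news_dim, attn_dim)
        self.query = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, news_vecs: torch.Tensor) -> torch.Tensor:
        # news_vecs: (batch, num_browsed_news, news_dim)
        scores = self.query(torch.tanh(self.proj(news_vecs)))   # (batch, n, 1)
        weights = torch.softmax(scores, dim=1)                  # attention over browsed news
        return (weights * news_vecs).sum(dim=1)                 # (batch, news_dim)

# Usage with random inputs standing in for encoded browsed news.
user_encoder = AttentivePooling(news_dim=128)
user_vec = user_encoder(torch.randn(4, 20, 128))   # (4, 128)
```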
Trusted Multi-View Classification
To this end, we propose a novel multi-view classification method, termed trusted multi-view classification, which provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
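A rough sketch of evidence-level fusion in the spirit of this method, assuming per-view class evidence has already been produced by view-specific networks; the belief/uncertainty conversion and the reduced Dempster-style combination below follow the common subjective-logic formulation and may differ in detail from the paper:

```python
import numpy as np

def opinion_from_evidence(evidence: np.ndarray):
    """Convert non-negative class evidence into belief masses and uncertainty
    (subjective-logic style: alpha = evidence + 1)."""
    K = evidence.shape[-1]
    S = evidence.sum() + K
    return evidence / S, K / S

def combine_opinions(b1, u1, b2, u2):
    """Reduced Dempster's rule: fuse two per-view opinions into one."""
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)  # mass on disagreeing classes
    scale = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    u = (u1 * u2) / scale
    return b, u

# Toy example with 3 classes and two views producing different evidence.
b1, u1 = opinion_from_evidence(np.array([9.0, 1.0, 0.0]))
b2, u2 = opinion_from_evidence(np.array([4.0, 3.0, 1.0]))
b, u = combine_opinions(b1, u1, b2, u2)
print(b, u)   # fused per-class belief and remaining uncertainty
```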
Tensor Canonical Correlation Analysis for Multi-view Dimension Reduction
As a consequence, the high-order correlation information contained in the different views is explored, and thus a more reliable common subspace shared by all features can be obtained.
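Tensor CCA generalizes CCA to capture high-order correlations across more than two views; as a simplified two-view stand-in for the shared-subspace idea (not the tensor formulation itself), a scikit-learn CCA sketch with synthetic data:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 5))          # shared signal driving both views
X = latent @ rng.normal(size=(5, 40)) + 0.1 * rng.normal(size=(200, 40))
Y = latent @ rng.normal(size=(5, 60)) + 0.1 * rng.normal(size=(200, 60))

cca = CCA(n_components=5)
X_c, Y_c = cca.fit_transform(X, Y)          # projections into a common subspace

# Correlation between paired projected components should be high.
corrs = [np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1] for i in range(5)]
print(corrs)
```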
Farewell to Mutual Information: Variational Distillation for Cross-Modal Person Re-Identification
The Information Bottleneck (IB) provides an information-theoretic principle for representation learning, retaining all information relevant for predicting the label while minimizing redundancy.
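One common way to write the IB objective, where Z is the learned representation of input X with label Y and beta trades compression against predictiveness (notation assumed here, not taken from the paper):

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```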
Variational Distillation for Multi-View Learning
Information Bottleneck (IB) based multi-view learning provides an information theoretic principle for seeking shared information contained in heterogeneous data descriptions.
Learning Autoencoders with Relational Regularization
A new algorithmic framework is proposed for learning autoencoders of data distributions.
Deep brain state classification of MEG data
The experimental results of cross-subject multi-class classification on the studied MEG dataset show that the inclusion of attention improves the generalization of the models across subjects.
COMPLETER: Incomplete Multi-view Clustering via Contrastive Prediction
In this paper, we study two challenging problems in incomplete multi-view clustering analysis, namely, i) how to learn an informative and consistent representation among different views without the help of labels and ii) how to recover the missing views from data.
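A hedged sketch of the two ingredients described above, assuming view encoders exist upstream: an InfoNCE-style contrastive term for cross-view consistency and a cross-view prediction term for recovering a missing view (names and dimensions are illustrative, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """Contrastive loss pulling together embeddings of the same sample
    from two views and pushing apart embeddings of different samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (batch, batch) similarities
    targets = torch.arange(z1.size(0))              # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

def view_recovery_loss(predictor: torch.nn.Module, z1, x2):
    """Cross-view prediction: recover the (possibly missing) second view
    from the first view's representation."""
    return F.mse_loss(predictor(z1), x2)

# Toy usage with random embeddings/features standing in for encoder outputs.
z1, z2 = torch.randn(32, 64), torch.randn(32, 64)
x2 = torch.randn(32, 128)
predictor = torch.nn.Linear(64, 128)
loss = info_nce(z1, z2) + view_recovery_loss(predictor, z1, x2)
```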
Trusted Multi-View Classification with Dynamic Evidential Fusion
With this in mind, we propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC), providing a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
Conditional Random Field Autoencoders for Unsupervised Structured Prediction
We introduce a framework for unsupervised learning of structured predictors with overlapping, global features.