Emotion Recognition
458 papers with code • 7 benchmarks • 45 datasets
Emotion Recognition is an important area of research for enabling effective human-computer interaction. Human emotions can be detected from speech signals, facial expressions, body language, and electroencephalography (EEG).
Source: Using Deep Autoencoders for Facial Expression Recognition
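As one illustration of the EEG route mentioned above, the sketch below (not tied to any of the cited papers) summarizes a signal as canonical band powers and feeds them to a simple classifier. The EEG here is synthetic noise and the labels are invented; only the feature pipeline is the point.

```python
# Illustrative sketch: emotion-related EEG features as band powers (delta/theta/alpha/beta)
# plus a linear classifier. The signal and labels are synthetic placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=128):
    """Average power in each canonical EEG band for one channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()])

rng = np.random.default_rng(0)
fs, seconds = 128, 4
X = np.array([band_powers(rng.normal(size=fs * seconds), fs) for _ in range(100)])
y = rng.integers(0, 2, size=100)          # pretend binary arousal labels
clf = LogisticRegression().fit(X, y)
print("band-power features shape:", X.shape, "| toy accuracy:", clf.score(X, y))
```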
Libraries
Use these libraries to find Emotion Recognition models and implementations
Datasets
Subtasks
Most implemented papers
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations
We propose several strong multimodal baselines and show the importance of contextual and multimodal information for emotion recognition in conversations.
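A minimal sketch of the "context matters" point, assuming nothing about the actual MELD pipeline: each utterance is represented with TF-IDF and the previous utterance's vector is concatenated as context before classification. The dialogue turns and emotion labels below are invented for illustration.

```python
# Toy sketch: contextual emotion recognition in a conversation by appending the
# previous utterance's TF-IDF vector to the current one. Data is invented, not MELD.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

dialogue = [
    ("I got the job!", "joy"),
    ("That's amazing, congratulations!", "joy"),
    ("But I have to move away next month.", "sadness"),
    ("Oh no, we'll miss you so much.", "sadness"),
    ("Are you serious right now?", "surprise"),
    ("Completely serious.", "neutral"),
]
texts = [t for t, _ in dialogue]
labels = [l for _, l in dialogue]

vec = TfidfVectorizer().fit(texts)
U = vec.transform(texts).toarray()
prev = np.vstack([np.zeros(U.shape[1]), U[:-1]])   # context: previous turn (zeros for turn 0)
X = np.hstack([U, prev])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```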
Multimodal Speech Emotion Recognition and Ambiguity Resolution
In this work, we adopt a feature-engineering based approach to tackle the task of speech emotion recognition.
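A minimal sketch of a feature-engineering pipeline of this kind, assuming hand-crafted descriptors (MFCC means, RMS energy, zero-crossing rate) and a classical SVM; the synthetic sine-wave "speech" only keeps the example runnable and is not the paper's data or feature set.

```python
# Hand-crafted speech features + SVM for emotion classification (illustrative sketch).
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def handcrafted_features(y, sr=16000):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)   # spectral shape
    rms = librosa.feature.rms(y=y).mean()                             # energy
    zcr = librosa.feature.zero_crossing_rate(y).mean()                # noisiness proxy
    return np.concatenate([mfcc, [rms, zcr]])

rng = np.random.default_rng(0)
sr = 16000
X, labels = [], []
for cls, f0 in enumerate([110.0, 220.0, 330.0]):          # three pretend emotion classes
    for _ in range(20):
        t = np.linspace(0, 1.0, sr, endpoint=False)
        y = np.sin(2 * np.pi * (f0 + rng.normal(0, 8)) * t) + 0.05 * rng.normal(size=sr)
        X.append(handcrafted_features(y.astype(np.float32), sr))
        labels.append(cls)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(labels), random_state=0)
print("toy accuracy:", SVC().fit(Xtr, ytr).score(Xte, yte))
```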
Multimodal Speech Emotion Recognition Using Audio and Text
Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers.
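A hedged sketch of the general audio-plus-text idea, not the paper's exact architecture: each modality is encoded separately, the encodings are concatenated, and a linear head classifies. The dimensions (40-d pooled audio features, 300-d text embeddings) are placeholders.

```python
# Simple audio+text fusion classifier (illustrative, not the paper's model).
import torch
import torch.nn as nn

class AudioTextFusion(nn.Module):
    def __init__(self, audio_dim=40, text_dim=300, hidden=64, n_classes=4):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, audio_feats, text_feats):
        fused = torch.cat([self.audio_enc(audio_feats), self.text_enc(text_feats)], dim=-1)
        return self.classifier(fused)

model = AudioTextFusion()
audio = torch.randn(8, 40)        # e.g. pooled MFCCs per utterance (placeholder)
text = torch.randn(8, 300)        # e.g. averaged word embeddings (placeholder)
print(model(audio, text).shape)   # -> torch.Size([8, 4])
```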
Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors
Humans convey their intentions through both verbal and nonverbal behaviors during face-to-face communication.
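A simplified sketch of the idea of shifting a word representation with nonverbal cues: a gate computed from visual and acoustic features scales a shift that is added to the word embedding. The module name and dimensions are illustrative assumptions, not the paper's model.

```python
# Gated nonverbal shift applied to word embeddings (illustrative sketch).
import torch
import torch.nn as nn

class NonverbalShift(nn.Module):
    def __init__(self, word_dim=300, visual_dim=35, acoustic_dim=74):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, word_dim)
        self.acoustic_proj = nn.Linear(acoustic_dim, word_dim)
        self.gate = nn.Linear(word_dim + visual_dim + acoustic_dim, word_dim)

    def forward(self, word, visual, acoustic):
        shift = self.visual_proj(visual) + self.acoustic_proj(acoustic)
        g = torch.sigmoid(self.gate(torch.cat([word, visual, acoustic], dim=-1)))
        return word + g * shift          # dynamically adjusted word representation

shift = NonverbalShift()
w, v, a = torch.randn(8, 300), torch.randn(8, 35), torch.randn(8, 74)
print(shift(w, v, a).shape)              # -> torch.Size([8, 300])
```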
Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts
Emotion cause extraction (ECE), the task aimed at extracting the potential causes behind certain emotions in text, has gained much attention in recent years due to its wide applications.
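A toy sketch of the pairing step behind emotion-cause pair extraction, assuming per-clause emotion and cause probabilities are already available from upstream classifiers; candidate pairs are scored with a simple product and a distance prior. The clauses and scores are invented.

```python
# Enumerate and rank (emotion clause, cause clause) pairs from toy per-clause scores.
from itertools import product

clauses = [
    "I lost my wallet on the bus",      # likely cause
    "so I was really upset",            # likely emotion
    "then my friend bought me lunch",   # likely cause
    "and I felt grateful",              # likely emotion
]
emotion_prob = [0.05, 0.90, 0.10, 0.85]   # pretend emotion-classifier outputs
cause_prob   = [0.80, 0.10, 0.75, 0.05]   # pretend cause-classifier outputs

def pair_score(i, j):
    """Combine clause-level scores with a simple 'nearby clauses' prior."""
    distance_prior = 1.0 / (1 + abs(i - j))
    return emotion_prob[i] * cause_prob[j] * distance_prior

pairs = [(i, j, pair_score(i, j)) for i, j in product(range(len(clauses)), repeat=2) if i != j]
for i, j, s in sorted(pairs, key=lambda p: -p[2])[:2]:
    print(f"emotion: '{clauses[i]}'  <- cause: '{clauses[j]}'  (score {s:.2f})")
```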
DialogXL: All-in-One XLNet for Multi-Party Conversation Emotion Recognition
Specifically, we first modify the recurrence mechanism of XLNet from segment-level to utterance-level in order to better model the conversational data.
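A hedged sketch of what utterance-level recurrence looks like in general, not DialogXL's actual XLNet modification: hidden states of past utterances are cached as a memory that the current utterance attends over, with the cache growing one utterance at a time rather than by fixed-length segments.

```python
# Utterance-level memory for conversational modeling (simplified illustration).
import torch
import torch.nn as nn

d_model, max_mem = 64, 128
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
memory = torch.zeros(1, 0, d_model)                 # grows one utterance at a time

utterances = [torch.randn(1, n_tokens, d_model) for n_tokens in (5, 9, 4)]
for utt in utterances:
    context = torch.cat([memory, utt], dim=1)       # past utterances + current tokens
    out, _ = attn(query=utt, key=context, value=context)
    memory = torch.cat([memory, out.detach()], dim=1)[:, -max_mem:]  # utterance-level cache
    print("utterance tokens:", utt.shape[1], "| memory length:", memory.shape[1])
```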
Training Deep Neural Networks on Noisy Labels with Bootstrapping
On MNIST handwritten digits, we show that our model is robust to label corruption.
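The bootstrapping idea can be written down compactly; the sketch below implements the soft variant, which mixes the given (possibly noisy) one-hot label with the model's own prediction before the cross-entropy. Treating the prediction term as a constant is one common choice here, not necessarily the paper's exact formulation.

```python
# Soft bootstrapping loss for training with noisy labels.
import torch
import torch.nn.functional as F

def soft_bootstrap_loss(logits, noisy_labels, beta=0.95):
    q = F.softmax(logits, dim=-1)                              # model's current belief
    t = F.one_hot(noisy_labels, num_classes=logits.size(-1)).float()
    target = beta * t + (1.0 - beta) * q.detach()              # blended soft target
    return -(target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

logits = torch.randn(16, 10, requires_grad=True)   # e.g. 10-class outputs (MNIST-style)
labels = torch.randint(0, 10, (16,))               # possibly corrupted labels
loss = soft_bootstrap_loss(logits, labels)         # beta = 1.0 recovers plain cross-entropy
loss.backward()
print(float(loss))
```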
DeXpression: Deep Convolutional Neural Network for Expression Recognition
The proposed architecture achieves 99.6% accuracy on CKP and 98.63% on MMI, therefore performing better than the state of the art using CNNs.
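A simplified sketch loosely inspired by the parallel feature-extraction idea (two convolutional paths concatenated); the layer sizes, input resolution, and 7-class head are illustrative assumptions, not the DeXpression architecture itself.

```python
# Small expression-recognition CNN with one parallel feature-extraction block (sketch).
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.path_a = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1), nn.ReLU(),
                                    nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())
        self.path_b = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                    nn.Conv2d(in_ch, out_ch, 1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.path_a(x), self.path_b(x)], dim=1)

model = nn.Sequential(
    nn.Conv2d(1, 32, 7, stride=2, padding=3), nn.ReLU(), nn.MaxPool2d(2),
    ParallelBlock(32, 32),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 7),                      # e.g. 7 basic expression classes
)
faces = torch.randn(4, 1, 64, 64)          # grayscale face crops (placeholder)
print(model(faces).shape)                  # -> torch.Size([4, 7])
```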
Efficient Low-rank Multimodal Fusion with Modality-Specific Factors
Previous research in this field has exploited the expressiveness of tensors for multimodal representation.
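A hedged sketch of the low-rank fusion idea: rather than materializing the full outer product of the modality vectors, each modality gets rank-R factor projections whose elementwise product is summed over ranks. Dimensions and the rank below are placeholders, not the paper's configuration.

```python
# Low-rank multimodal fusion with modality-specific factors (illustrative sketch).
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    def __init__(self, dims=(32, 35, 74), out_dim=64, rank=4):
        super().__init__()
        # One factor per modality; the +1 appends a constant, as in tensor fusion.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, out_dim) * 0.1) for d in dims]
        )

    def forward(self, *modalities):
        fused = None
        for x, factor in zip(modalities, self.factors):
            ones = torch.ones(x.size(0), 1, device=x.device)
            z = torch.cat([x, ones], dim=-1)                   # (batch, d+1)
            proj = torch.einsum("bd,rdo->bro", z, factor)      # (batch, rank, out_dim)
            fused = proj if fused is None else fused * proj    # elementwise across modalities
        return fused.sum(dim=1)                                # sum over ranks

lmf = LowRankFusion()
text, visual, audio = torch.randn(8, 32), torch.randn(8, 35), torch.randn(8, 74)
print(lmf(text, visual, audio).shape)      # -> torch.Size([8, 64])
```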
Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment Analysis
In this paper, based on audio and text, we consider the task of multimodal sentiment analysis and propose a novel fusion strategy, combining multi-feature fusion and multi-modality fusion, to improve the accuracy of audio-text sentiment analysis.
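A minimal sketch of the two-level idea, assuming placeholder feature sets: several feature types are fused within each modality first (multi-feature fusion), and the resulting modality representations are then fused for the sentiment head (multi-modality fusion). Names and dimensions are illustrative, not the paper's.

```python
# Two-level fusion: multi-feature fusion per modality, then multi-modality fusion.
import torch
import torch.nn as nn

class TwoLevelFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.audio_fuse = nn.Linear(13 + 2, 32)    # e.g. MFCC stats + prosody stats
        self.text_fuse = nn.Linear(300 + 50, 32)   # e.g. word embeddings + lexicon scores
        self.head = nn.Linear(64, 3)               # negative / neutral / positive

    def forward(self, mfcc, prosody, embeds, lexicon):
        audio = torch.relu(self.audio_fuse(torch.cat([mfcc, prosody], dim=-1)))
        text = torch.relu(self.text_fuse(torch.cat([embeds, lexicon], dim=-1)))
        return self.head(torch.cat([audio, text], dim=-1))

model = TwoLevelFusion()
out = model(torch.randn(8, 13), torch.randn(8, 2), torch.randn(8, 300), torch.randn(8, 50))
print(out.shape)   # -> torch.Size([8, 3])
```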