Learning representations of affect from speech

Date:
20/11/15
Authors:
Sayan Ghosh, Eugene Laksana, Louis-Philippe Morency, Stefan Scherer
Abstract:
There has been substantial prior work on representation learning for speech recognition applications, but little emphasis has been given to investigating effective representations of affect from speech, where the paralinguistic elements of speech are separated from the verbal content. In this paper, we explore denoising autoencoders for learning paralinguistic attributes, i.e., categorical and dimensional affective traits, from speech. We show that the representations learnt by the bottleneck layer of the autoencoder are highly discriminative of activation intensity and effective at separating negative valence (sadness and anger) from positive valence (happiness). We also learn utterance-specific representations through a combination of denoising autoencoders and LSTM-based recurrent autoencoders. Emotion classification is performed with the learnt temporal/dynamic representations to evaluate their quality. Experiments on a well-established real-life speech dataset (IEMOCAP) show that the learnt representations are comparable to state-of-the-art feature extractors (such as voice quality features and MFCCs) at emotion and dimensional affect recognition.
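The core technique in the abstract is a denoising autoencoder whose bottleneck activations serve as the learnt affect representation. A minimal NumPy sketch of that idea is below; it is not the authors' implementation, and the layer sizes, noise level, and learning rate are illustrative assumptions (real inputs would be frame-level acoustic features such as MFCCs rather than random data).

```python
import numpy as np

# Hypothetical sketch of a single-layer denoising autoencoder:
# corrupt the input with Gaussian noise, reconstruct the CLEAN input,
# and use the bottleneck activations as the learnt representation.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_dae(X, n_hidden=8, noise_std=0.1, lr=0.1, epochs=200):
    """Plain gradient descent on squared reconstruction error."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        X_noisy = X + rng.normal(0, noise_std, X.shape)  # corrupt input
        H = sigmoid(X_noisy @ W1 + b1)                   # bottleneck code
        X_rec = H @ W2 + b2                              # linear decoder
        err = X_rec - X                                  # target is clean X
        # Backpropagate through decoder and encoder
        dW2 = H.T @ err / len(X)
        db2 = err.mean(axis=0)
        dH = (err @ W2.T) * H * (1 - H)
        dW1 = X_noisy.T @ dH / len(X)
        db1 = dH.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

def encode(X, W1, b1):
    """Bottleneck representation used as the affect feature."""
    return sigmoid(X @ W1 + b1)

# Toy demo on random stand-ins for frame-level features
X = rng.normal(0, 1, (64, 20))
W1, b1, W2, b2 = train_dae(X)
Z = encode(X, W1, b1)
print(Z.shape)  # 64 frames, 8-dimensional bottleneck code
```

In the paper's pipeline these frame-level codes would then feed an LSTM-based recurrent autoencoder to obtain utterance-level representations; that stage is omitted here for brevity.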
Description:

The paper describes tools for recognizing emotions from speech. Its practical utility is low for now.