Audio Sentiment Analysis by Heterogeneous Signal Features Learned from Utterance-Based Parallel Neural Network

EasyChair Preprint 668, version 2
18 pages • Date: December 13, 2018

Abstract

Audio sentiment analysis is a popular research area that extends text-based sentiment analysis and depends on the effectiveness of the acoustic features extracted from speech. However, current work on audio sentiment analysis mainly focuses on extracting homogeneous acoustic features or does not fuse heterogeneous features effectively. In this paper, we propose an utterance-based deep neural network model, consisting of a parallel combination of a CNN-based and an LSTM-based network, to obtain representative features, termed the Audio Sentiment Vector (ASV), that maximally reflect the sentiment information in an audio clip. Specifically, our model is trained with utterance-level labels, and the ASV is extracted from the two branches and fused. In the CNN branch, spectrum graphs produced by the signals are fed as inputs, while in the LSTM branch, the inputs are the spectral centroid, MFCCs, and other well-established traditional acoustic features extracted from the dependent utterances of an audio clip. In addition, a BiLSTM with an attention mechanism is used for feature fusion. Extensive experiments show that our model recognizes audio sentiment precisely and that our ASV outperforms traditional acoustic features as well as vectors extracted from other deep learning models. Furthermore, the experimental results indicate that the proposed model outperforms state-of-the-art approaches by 9.33% on MOSI.

Keyphrases: Audio Sentiment Analysis, feature fusion, signal processing
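The abstract describes a two-branch architecture: a CNN over utterance spectrograms, an LSTM over traditional frame-level acoustic features (MFCCs, spectral centroid, etc.), and a BiLSTM with attention that fuses the per-utterance vectors into an audio-level prediction. The sketch below is a minimal PyTorch illustration of that structure, not the authors' implementation; all layer sizes, input shapes, and class names are assumptions for clarity.

```python
# Minimal sketch of the described two-branch model (illustrative, not the
# authors' code): CNN branch over spectrogram images, LSTM branch over
# hand-crafted frame features, BiLSTM + attention fusion over utterances.
import torch
import torch.nn as nn


class CNNBranch(nn.Module):
    """Encodes one utterance spectrogram (1 x 64 x 64) into a fixed vector."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 16 * 16, out_dim)

    def forward(self, spec):                      # spec: (B, 1, 64, 64)
        return self.fc(self.conv(spec).flatten(1))


class LSTMBranch(nn.Module):
    """Encodes a sequence of frame-level acoustic features for one utterance."""
    def __init__(self, feat_dim=34, out_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, out_dim, batch_first=True)

    def forward(self, frames):                    # frames: (B, T, feat_dim)
        _, (h_n, _) = self.lstm(frames)
        return h_n[-1]                            # last hidden state: (B, out_dim)


class AttentiveFusion(nn.Module):
    """BiLSTM over per-utterance ASVs, pooled with additive attention."""
    def __init__(self, in_dim=256, hidden=128, num_classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, asv_seq):                   # asv_seq: (B, U, in_dim)
        h, _ = self.bilstm(asv_seq)               # (B, U, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)    # attention weights over utterances
        pooled = (w * h).sum(dim=1)               # (B, 2*hidden)
        return self.classifier(pooled)


# Toy forward pass: a batch of 2 audio clips, each segmented into 4 utterances.
B, U, T, F = 2, 4, 100, 34
specs = torch.randn(B, U, 1, 64, 64)              # spectrogram per utterance
frames = torch.randn(B, U, T, F)                  # traditional features per utterance

cnn, lstm, fusion = CNNBranch(), LSTMBranch(), AttentiveFusion()
asv = torch.cat([cnn(specs.flatten(0, 1)),        # concatenate the two branch outputs
                 lstm(frames.flatten(0, 1))], dim=-1).view(B, U, -1)
logits = fusion(asv)                              # audio-level sentiment logits
print(logits.shape)                               # torch.Size([2, 2])
```

In this sketch the ASV is simply the concatenation of the two branch outputs for each utterance; the paper's fusion scheme may differ in detail.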