
Machine Speech Chain with Emotion Recognition

EasyChair Preprint 15151

6 pages · Date: September 28, 2024

Abstract

Developing natural speech recognition and speech synthesis systems requires speech data that authentically represents real emotions. However, such data is often difficult to obtain. The machine speech chain offers a solution by using unpaired data to continue training models initially trained on paired data. Because unpaired data is far more abundant than paired data, the machine speech chain can be instrumental for recognizing emotions in speech when training data is limited. This study investigates the application of the machine speech chain to speech emotion recognition and to speech recognition of emotional speech. Our findings indicate that a model trained with 50% paired neutral-emotion speech data and 22% paired non-neutral emotional speech data reduces its Character Error Rate (CER) from 37.55% to 34.52% when further trained with unpaired neutral-emotion speech data. The CER decreases further, to 33.75%, when the model is additionally trained with combined unpaired speech data. The accuracy of recognizing non-neutral emotions ranged from 2.18% to 53.51%, while the F1 score fluctuated, increasing by up to 20.6% and decreasing by up to 23.4%. Taken together, these two metrics suggest that the model is biased toward the majority class.
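The CER figures above are the standard character-level metric: the Levenshtein edit distance between the recognized transcript and the reference, normalized by the reference length. The paper does not show its evaluation code, but the metric itself can be sketched as follows (a minimal illustration, not the authors' implementation):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein distance / reference length."""
    r, h = reference, hypothesis
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i  # delete all i reference characters
    for j in range(len(h) + 1):
        dp[0][j] = j  # insert all j hypothesis characters
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(r)][len(h)] / len(r)

# One substitution in a 10-character reference gives CER = 0.10
print(cer("halo dunia", "halo duniu"))  # → 0.1
```

A CER drop from 37.55% to 33.75%, as reported, means roughly four fewer character-level errors per hundred reference characters.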

Keyphrases: machine speech chain, speech emotion recognition, speech recognition, unpaired data

BibTeX entry
BibTeX does not have a suitable entry type for preprints; the following is a workaround that produces the correct reference:
@booklet{EasyChair:15151,
  author    = {Akeyla Pradia Naufal and Dessi Puji Lestari and Ayu Purwarianti and Kurniawati Azizah and Dipta Tanaya and Sakriani Sakti},
  title     = {Machine Speech Chain with Emotion Recognition},
  howpublished = {EasyChair Preprint 15151},
  year      = {EasyChair, 2024}}