Title Adaptive Feature Generation for Speech Emotion Recognition
Authors Euihwan Han; Hyungtai Cha
DOI https://doi.org/10.5573/IEIESPC.2020.9.3.185
Pages 185-192
ISSN 2287-5255
Keywords Sentiment analysis; Machine learning; Speech feature; Feature generation
Abstract The issue of emotion recognition has received considerable attention in artificial intelligence and machine learning. In sentiment analysis, researchers recognize emotional states from speech, electroencephalograms, images, and other signals, and the speech signal is among the most widely used for emotion recognition. There are many speech features, including pitch, energy, linear prediction coefficients, mel-frequency cepstral coefficients, and the Teager energy operator. In this study, we explore the critical speech features for sentiment analysis. We modify our previous feature-generation method, which applies low-variance filtering and principal component analysis (for grouped features) to identify those features. Instead of between-class scatter alone, we use the ratio of between-class scatter to within-class scatter, and grouping is performed according to the number of features rather than correlation values. For an objective evaluation, we use the Ryerson Audio-Visual Database of Emotional Speech and Song, with performance evaluated in terms of classifier accuracy and computational complexity. Finally, we propose an effective feature-generation method to find the critical features for emotion recognition from speech.
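The abstract describes the pipeline only at a high level. As a rough illustration of the described steps (low-variance filtering, grouping by feature count, per-group PCA, and ranking by the between-class to within-class scatter ratio), the sketch below assumes an arbitrary group size, variance threshold, and number of principal components per group, and uses scikit-learn's VarianceThreshold and PCA. These choices are not taken from the paper; this is a plausible reading of the abstract, not the authors' implementation.

```python
# Minimal sketch of the feature-generation steps named in the abstract.
# Group size, variance threshold, and component counts are illustrative
# assumptions, not values reported by the authors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import VarianceThreshold


def fisher_ratio(X, y):
    """Per-feature ratio of between-class scatter to within-class scatter."""
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)


def generate_features(X, y, group_size=5, var_threshold=0.01, n_components=2):
    """Low-variance filtering, grouping by feature count, and per-group PCA."""
    # 1) Drop near-constant features.
    X = VarianceThreshold(threshold=var_threshold).fit_transform(X)

    # 2) Split the remaining features into fixed-size groups
    #    (grouping by feature count, not by correlation).
    groups = [X[:, i:i + group_size] for i in range(0, X.shape[1], group_size)]

    # 3) Apply PCA within each group and keep the leading components.
    projected = [PCA(n_components=min(n_components, g.shape[1])).fit_transform(g)
                 for g in groups]
    Z = np.hstack(projected)

    # 4) Rank the generated features by the between/within scatter ratio.
    order = np.argsort(fisher_ratio(Z, y))[::-1]
    return Z[:, order]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))      # placeholder speech-feature matrix
    y = rng.integers(0, 8, size=200)    # e.g., eight RAVDESS emotion labels
    print(generate_features(X, y).shape)
```

The ranked, generated features would then feed an emotion classifier, whose accuracy and computational cost could be compared against the unreduced feature set, in the spirit of the evaluation described in the abstract.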