References

1. Q. Jin, C. Li, S. Chen, H. Wu, 2015, Speech emotion recognition with acoustic and lexical features, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4749-4753
2. H. S. Kumbhar, S. U. Bhandari, 2019, Speech Emotion Recognition using MFCC features and LSTM network, International Conference on Computing, Communication, Control and Automation, pp. 1-3
3. N. Jain, S. Kumar, A. Kumar, P. Shamsolmoali, M. Zareapoor, 2018, Hybrid deep neural networks for face emotion recognition, Pattern Recognition Letters, Vol. 115, pp. 101-106
4. D. Shin, D. Shin, D. Shin, 2017, Development of emotion recognition interface using complex EEG/ECG bio-signal for interactive contents, Multimedia Tools and Applications, Vol. 76, No. 9, pp. 11449-11470
5. J. Zhao, X. Mao, L. Chen, 2019, Speech emotion recognition using deep 1D & 2D CNN LSTM networks, Biomedical Signal Processing and Control, Vol. 47, pp. 312-323
6. K. Han, D. Yu, I. Tashev, 2014, Speech Emotion Recognition Using Deep Neural Network and Extreme Learning Machine, Interspeech 2014
7. K. Ko, D. Shin, K. Sim, 2009, Development of Context Awareness and Service Reasoning Technique for Handicapped People, Korean Institute of Intelligent Systems, Vol. 19, No. 1, pp. 34-39
8. Y. Huang, J. Yang, P. Liao, J. Pan, 2017, Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition, Computational Intelligence and Neuroscience, Vol. 2017, pp. 1-8
9. Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, et al., 2017, Tacotron: Towards End-to-End Speech Synthesis, Interspeech, pp. 4006-4010
10. S. Byun, S. Lee, 2016, Emotion Recognition Using Tone and Tempo Based on Voice for IoT, The Transactions of the Korean Institute of Electrical Engineers, Vol. 65, No. 1, pp. 116-121
11. K. Park, 2018, KSS Dataset: Korean single speaker speech dataset, https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset/
12. S. Yoon, S. Byun, K. Jung, 2018, Multimodal Speech Emotion Recognition Using Audio and Text, IEEE Spoken Language Technology Workshop (SLT), pp. 112-118
13. B. T. Atmaja, K. Shirai, M. Akagi, 2019, Speech Emotion Recognition Using Speech Feature and Word Embedding, Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 519-523
14. F. Eyben, M. Wöllmer, B. Schuller, 2010, openSMILE: the Munich versatile and fast open-source audio feature extractor, In Proceedings of the 18th ACM International Conference on Multimedia (MM '10), Association for Computing Machinery, pp. 1459-1462
15. S. Bird, E. Loper, 2004, NLTK: The Natural Language Toolkit, In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pp. 214-217
16. J. Pennington, R. Socher, C. Manning, 2014, GloVe: Global Vectors for Word Representation, In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543