• 대한전기학회
Mobile QR Code QR CODE : The Transactions of the Korean Institute of Electrical Engineers
  • COPE
  • kcse
  • 한국과학기술단체총연합회
  • 한국학술지인용색인
  • Scopus
  • crossref
  • orcid

References

1 
Y. U. Kim, 2020, Artificial Intelligence and Power Systems, Trans of the KIEE, Vol. 69, No. 7, pp. 24-30DOI
2 
Guehyun Lee, Weon-Goo Kim, 2015, Emotion recognition using pitch parameters of speech, Journal of the Korean Institute of Intelligent Systems, Vol. 25, No. 3, pp. 272-278DOI
3 
Mehmet Berkehan Akçay, Kaya Oğuz, 2020, Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers, Speech Communication, Vol. 116, pp. 56-76DOI
4 
Saranya Rajan, 2019, Facial expression recognition techniques: a comprehensive survey, IET Image Processing, Vol. 13, No. 7, pp. 1031-1040DOI
5 
Nazmi Sofian Suhaimi, James Mountstephens, Jason Teo, 2020, EEG-based emotion recognition: A state-of-the-art review of current trends and opportunities, Computational intelligence and neuroscienceDOI
6 
Hyeun-Joo Go, Dae-Jong Lee, Myung-Geun Chun, 2004, An Emotion Recognition Method using Facial Expression and Speech Signal, Journal of Korea Information Science Society (KISS), Vol. 31, No. 6, pp. 799-807DOI
7 
Carlos Busso, 2004, Analysis of emotion recognition using facial expressions, speech and multimodal information, Proceedings of the 6th international conference on Multimodal interfaces, pp. 205-211DOI
8 
Sung-Woo Byun, Seok-Pil Lee, 2016, Emotion recognition using tone and tempo based on voice for IoT, The transactions of The Korean Institute of Electrical Engineers, Vol. 65, No. 1, pp. 116-121DOI
9 
Gi-duk Kim, Mi-sook Kim, Hack-man Lee, 2021, Speech emotion recognition through time series classification, Proceedings of the Korean Society of Computer Information Conference. Korean Society of Computer Information, pp. 11-13DOI
10 
Eui-Hwan Han, Hyung-Tai Cha, 2017, A Novel Method for Modeling Emotional Dimensions using Expansion of Russell's Model, Science of Emotion and Sensibility, Vol. 20, No. 1, pp. 75-82DOI
11 
K. J. Noh, H. Jeong, , KEMDy19, https://nanum.etri.re.kr/share/kjnoh/KEMDy19?lang=ko_KRGoogle Search
12 
Felix Burkhardt, 2005, A database of German emotional speech, Interspeech, Vol. 5, pp. 1517-1520DOI
13 
Ju-Hee Kim, Seok-Pil Lee, 2021, Multi-modal emotion recognition using speech features and text embedding, Trans. Korean Inst. Electr. Eng, Vol. 70, pp. 108-113DOI
14 
Euisun Choi, Chulhee Lee, 2003, Feature extraction based on the Bhattacharyya distance, Pattern Recognition, Vol. 36, No. 8, pp. 1703-1709DOI
15 
Je-Seong Park, 2019, Development of Display Panel Mura Detection Algorithm Using Regression Analysis and Mahalanobis Distance, Master's thesis, Hoseo University Graduate SchoolGoogle Search
16 
JUNWOO TAK, 2021, Private Information Retrieval with Information Leakage under KL Divergence and JS DivergenceGoogle Search