Mobile QR Code QR CODE : The Transactions P of the Korean Institute of Electrical Engineers
The Transactions P of the Korean Institute of Electrical Engineers

Korean Journal of Air-Conditioning and Refrigeration Engineering

ISO Journal TitleTrans. P of KIEE
  • Indexed by
    Korea Citation Index(KCI)

References

1 
P. Gangamohan, S.R. Kadiri and B. Yegnanarayana, “Analysis of Emotional Speech—A Review,” Toward Robotic Socially Believable Behaving Systems, vol. I, pp. 205-238, Mar. 2016.DOI
2 
R. A. Khalil, E. Jones, M. I. Babar, T. Jan, M. H. Zafar and T. Alhussain, “Speech Emotion Recognition Using Deep Learning Techniques: A Review,” in IEEE Access, vol. 7, pp. 117327-117345, Aug. 2019.DOI
3 
J. D. Lope, M. Graña, “An ongoing review of speech emotion recognition,” Neurocomputing, vol. 528, pp.1-11, Apr. 2023.DOI
4 
T. Anvarjon, Mustaqeem, and K. Soonil, “Deep-Net: A Lightweight CNN-Based Speech Emotion Recognition System Using Deep Frequency Features,” Sensors vol. 20, no. 18, pp. 1-16, Sep. 2020.DOI
5 
P. Jiang, H. Fu, H. Tao, P. Lei and L. Zhao, “Parallelized Convolutional Recurrent Neural Network With Spectral Features for Speech Emotion Recognition,” in IEEE Access, vol. 7, pp. 90368-90377, Jul. 2019.DOI
6 
R. Nagase, T. Fukumori and Y. Yamashita, “Speech Emotion Recognition Using Label Smoothing Based on Neutral and Anger Characteristics,” 2022 IEEE 4th Global Conference on Life Sciences and Technologies (LifeTech), pp. 626- 627, Apr. 2022.DOI
7 
H. Zhang, R. Gou, J Shang, F. Shen, Y. Wu and G. Dai, “Pre-trained Deep Convolution Neural Network Model With Attention for Speech Emotion Recognition,” Front. Physiol, vol. 12, no. 643202, pp. 1-13, Mar. 2021.DOI
8 
W. Zhang and Y. Jia, “A Study on Speech Emotion Recognition Model Based on Mel-Spectrogram and CapsNet,” 2021 3rd International Academic Exchange Conference on Science and Technology Innovation (IAECST), pp. 231-235, Feb. 2022.DOI
9 
S. Han et al., “Speech Emotion Recognition with a ResNet-CNN-Transformer Parallel Neural Network,” 2021 International Conference on Communications, Information System and Computer Engineering (CISCE), pp. 803-807, May. 2021.DOI
10 
S. Kakuba and D. S. Han, “Speech Emotion Recognition using Context-Aware Dilated Convolution Network,” 2022 27th Asia Pacific Conference on Communications (APCC), pp. 601-604, Nov. 2022.DOI
11 
B. Liang, S. D. Iwnicki and Y. Zhao, “Application of power spectrum cepstrum higher order spectrum and neural network analyses for induction motor fault diagnosis,” Mech. Syst. Signal Process., vol. 39, no. 1, pp. 342-360, Aug. 2013.DOI
12 
Z. K. Abdul and A. K. Al-Talabani, “Mel Frequency Cepstral Coefficient and its Applications: A Review,” in IEEE Access, vol. 10, pp. 122136-122158, Nov. 2022.DOI
13 
X. Zhao, D. Wang, “Analyzing noise robustness of MFCC and GFCC features in speaker identification,” In Proceedings of the 2013 IEEE international conference on acoustics, speech and signal processing, pp. 7204–7208, Oct. 2013.DOI
14 
A.G. Katsiamis, E.M. Drakakis, R.F. Lyon, “Practical gammatone- like filters for auditory processing,” EURASIP J. Audio Speech Music Process, pp. 1-15, Dec. 2007.URL
15 
F. S. Matikolaie, C. Tadj, “On the use of long-term features in a newborn cry diagnostic system,” Biomedical Signal Processing and Control, vol. 59, no. 101889, pp. 1-9, May. 2020.DOI
16 
Z. Khalilzad, Tadj, “Using CCA-Fused Cepstral Features in a Deep Learning-Based Cry Diagnostic System for Detecting an Ensemble of Pathologies in Newborns,” Diagnostics(Basel), vol. 13, no. 5, pp. 1-24, Feb. 2023.DOI
17 
M. Haghighat, M. Abdel-Mottaleb, W. Alhalabi, “Fully automatic face normalization and single sample face recognition in unconstrained environments,” Expert Systems with Applications, vol. 47, pp. 23-34, Apr. 2016.DOI
18 
J. Kim, M. Hyun, I. Chung and N. Kwak, “Feature Fusion for Online Mutual Knowledge Distillation,” 2020 25th International Conference on Pattern Recognition (ICPR), pp. 4619-4625, May. 2021.DOI
19 
AI-Hub emotion classification dataset, online available: https:// www.aihub.or.kr/aihubdata/data/view.do?currMenu=120&topMenu=100&dataSetSn=259&aihubDataSe=extrldata.URL