The Transactions of the Korean Institute of Electrical Engineers
Open Access
Monthly
ISSN: 1975-8359 (Print)
ISSN: 2287-4364 (Online)
http://www.tkiee.org/kiee
ISO Journal Title: Trans. Korean Inst. Elect. Eng.
2025-08 (Vol. 74, No. 08)
DOI: 10.5370/KIEE.2025.74.8.1426