Title Unveiling Security Threats in Split Learning with Text-based Data Inference Attacks
Authors 이세종(Sejong Lee) ; 김정인(Jungin Kim) ; 김유신(Yushin Kim) ; 조성현(Sunghyun Cho)
DOI https://doi.org/10.5573/ieie.2024.61.7.15
Page pp.15-23
ISSN 2287-5026
Keywords Split-learning; Privacy-preserving; Data security; Inference attack
Abstract As artificial intelligence technology advances, the demand for extensive data and computing resources for training has increased significantly. Nevertheless, development is constrained by limited computing resources and by privacy concerns related to data ownership. To address these issues, distributed learning technologies have emerged, with split learning gaining significant attention as a method that allows collaborative model training without sharing raw data. However, recent studies have revealed security vulnerabilities in split learning, where inference attacks using smashed data can covertly extract sensitive information from clients. This paper discusses the risks associated with these vulnerabilities and proposes a text-based inference attack framework to demonstrate them. In the proposed attack, the server, acting as an attacker, performs vector matching between the client-side smashed data and a reference smashed dataset to infer the client's local data. This paper demonstrates the vulnerabilities of split learning models, including those handling text data, and underscores the need for enhanced data privacy mechanisms to strengthen privacy protection in distributed learning environments.
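The vector-matching step described in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the function name, the use of cosine similarity, and the toy reference dataset are all assumptions. The idea is that the server holds a reference set of smashed representations with known source texts and, for each intercepted client smashed vector, returns the reference text whose representation is most similar.

```python
import numpy as np

def infer_client_data(smashed, reference_smashed, reference_texts):
    """Hypothetical sketch of a vector-matching inference attack:
    the server compares an intercepted client smashed vector against a
    reference smashed dataset and returns the closest reference text.
    Cosine similarity is assumed here as the matching metric."""
    # L2-normalize the reference vectors and the query vector
    ref = reference_smashed / np.linalg.norm(reference_smashed, axis=1, keepdims=True)
    q = smashed / np.linalg.norm(smashed)
    # Cosine similarity of the query against every reference vector
    sims = ref @ q
    best = int(np.argmax(sims))
    return reference_texts[best], float(sims[best])
```

For example, with a two-entry reference set, a smashed vector close to the first reference representation would be matched to the first reference text, letting the attacker infer the client's local input without ever seeing the raw data.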