WORLD SCI-TECH R&D ›› 2026, Vol. 48 ›› Issue (1): 69-79. doi: 10.16507/j.issn.1006-6055.2025.12.005 cstr: 32308.14.1006-6055.2025.12.005
WEI Xia1 ZHANG Wenjun2
Abstract: With the widespread application of large language models (LLMs) across various fields, privacy governance issues and the models' own security risks have become increasingly prominent. This paper systematically examines the dual role of LLMs in privacy protection: on one hand, LLMs, as intelligent tools, can strengthen data security capabilities, for example by improving accuracy in code vulnerability detection tasks; on the other hand, they are themselves exposed to typical privacy attacks, including gradient leakage, membership inference, and disclosure of personally identifiable information, which pose significant privacy risks. Based on China's legal framework for cyberspace governance, this paper reviews compliant applications of LLMs in privacy protection, organized around preventive obligations, processing rules, rights protection, and incident response. It also analyzes typical privacy attacks targeting LLMs and the corresponding defense methods, and discusses measures to enhance LLM security across the full "data-training-inference" lifecycle, highlighting the fundamental tension between model scale expansion and privacy protection requirements that must be addressed in the future.
Key words: Large Language Models; Defense Mechanisms; Privacy Protection; Data Cleaning; Supervised Fine-Tuning
WEI Xia, ZHANG Wenjun. Research on Privacy Protection Applications of Large Language Models and Defense Against Their Own Risks[J]. WORLD SCI-TECH R&D, 2026, 48(1): 69-79.
URL: https://www.globesci.com/EN/10.16507/j.issn.1006-6055.2025.12.005
https://www.globesci.com/EN/Y2026/V48/I1/69