WORLD SCI-TECH R&D ›› 2026, Vol. 48 ›› Issue (1): 69-79. doi: 10.16507/j.issn.1006-6055.2025.12.005 cstr: 32308.14.1006-6055.2025.12.005


Research on Privacy Protection Applications of Large Language Models and Defense Against Their Own Risks

WEI Xia1, ZHANG Wenjun2

  1. Xi'an Mingde Institute of Technology; 2. Shaanxi Branch of National Computer Network Emergency Response Technical Team/Coordination Center
  • Published: 2026-02-28

Abstract: With the widespread application of large language models (LLMs) across various fields, privacy governance issues and the models' own security risks have become increasingly prominent. This paper systematically explores the dual nature of LLMs in privacy protection. On one hand, LLMs can serve as intelligent tools that strengthen data security capabilities, for example by improving accuracy in code vulnerability detection tasks. On the other hand, LLMs themselves are exposed to typical privacy attacks, including gradient leakage, membership inference, and disclosure of personally identifiable information, which pose significant privacy risks. Based on China's legal framework for cyberspace governance, this paper reviews the compliant application of LLMs to privacy protection along the structure of preventive obligations, processing rules, rights protection, and incident response. It also analyzes typical privacy attacks on LLMs and the corresponding defense methods, and discusses measures to enhance LLM security across the full "data-training-inference" lifecycle, highlighting the fundamental conflict between model scale expansion and privacy protection requirements that must be addressed in the future.

Key words: Large Language Models; Defense Mechanisms; Privacy Protection; Data Cleaning; Supervised Fine-Tuning