
‘Using AI to secure AI’ in large language model applications: entrepreneur



Zhou Hongyi, a member of the 14th National Committee of the Chinese People's Political Consultative Conference and chairman of Chinese cybersecurity firm 360 Group. Photo: VCG


As China's artificial intelligence (AI) technology advances rapidly, large language models (LLMs) are a "double-edged sword," bringing both efficiency gains and challenges that should be viewed rationally, a well-known Chinese entrepreneur told the Global Times on the sidelines of the ongoing "two sessions."

"AI is reshaping every industry, and domestic models like DeepSeek are driving technological innovation and China's economic growth," Zhou Hongyi, a member of the 14th National Committee of the Chinese People's Political Consultative Conference (CPPCC) and founder of 360 Group, told the Global Times on Sunday. Specialized AI models will be deployed across governments and enterprises as intelligent agents, integrating internal knowledge bases to drive digital transformation and efficiency, he said.

Zhou said challenges such as hallucinations and content security cannot be addressed by traditional security solutions. "Any security vulnerability in the underlying model, knowledge base, or intelligent agent within government or corporate systems could trigger systemic risks in the entire production environment," he said.

There are growing concerns over AI hallucinations, where LLMs generate articulate yet misleading information, Zhou said, warning that in policymaking, legal interpretation, and business decisions, such errors could pose security risks if misleading outputs influence critical processes.

However, "hallucinations are not just a flaw but an inherent trait of AI. A model without hallucinations wouldn't be intelligent," Zhou said, noting that this ability to draw unexpected connections is crucial. For example, it can help imagine new drug molecules or protein structures, driving innovation beyond traditional methods.

AI hallucinations can be mitigated through techniques like retrieval-augmented generation (RAG), which corrects outputs by cross-referencing professional knowledge bases and real-time online information. 
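To make the mechanism concrete, here is a minimal, self-contained sketch of the RAG pattern in Python. The in-memory knowledge base, the keyword-overlap retriever, and all function names are illustrative assumptions, not any particular vendor's API; a production system would use vector embeddings and a real LLM client.

```python
# Toy illustration of retrieval-augmented generation (RAG): instead of
# letting the model answer from parametric memory alone, we first retrieve
# supporting passages and ground the prompt in them, which constrains the
# model and reduces the room for hallucinated answers.

KNOWLEDGE_BASE = [
    "DeepSeek is a Chinese AI company that develops large language models.",
    "Retrieval-augmented generation grounds model outputs in external documents.",
    "360 Group is a Chinese cybersecurity firm chaired by Zhou Hongyi.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that tells the model to answer only from the context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so instead of guessing.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("What is retrieval-augmented generation?"))
```

The key design point is the final instruction in the prompt: by giving the model an explicit way to decline, the system trades a little coverage for a lower rate of confidently wrong output.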

Zhou recommended a flexible regulatory strategy that tolerates certain errors instead of swiftly penalizing AI models for minor missteps. He proposed allowing companies more time to rectify their mistakes, thereby encouraging robust innovation and competition. This approach could enable more firms to emulate DeepSeek's achievements.

Besides hallucinations, knowledge base security and intelligent agent security are two other major security concerns in AI deployment, according to Zhou. 

He added that core data assets of governments and enterprises are stored in internal knowledge bases, which, without adequate protections, might be exploited to extract sensitive information through AI interactions.

"For example, if critical documents are input into a knowledge base for training an AI, there's a risk that the model could inadvertently disclose all it has learned, allowing unauthorized access to confidential data," Zhou said.

Zhou detailed how intelligent agents are extensively integrated into the IT systems of governments and businesses, making them susceptible to cyberattacks whose consequences could range from sending harmful emails to halting production lines.

The Global Times has found that the lifecycle of an AI model - which includes data preparation, training and deployment - introduces multiple points of vulnerability. Attackers can exploit these phases to disrupt operations, bypass security measures, or initiate unauthorized actions, resulting in service outages. Furthermore, the openness of LLMs has increased their risks of data poisoning, backdoor attacks and adversarial manipulation, compounding security challenges.
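As one concrete illustration of hardening the data-preparation phase against poisoning, a pipeline can pin every vetted training file to a known-good hash and refuse to proceed on any mismatch. The manifest format, filenames, and digests below are assumptions for illustration, not a standard or any specific product's mechanism.

```python
# Minimal integrity check against the data-poisoning risk noted above:
# record a SHA-256 digest for each training file when the dataset is vetted,
# then verify before every training run that nothing has drifted.
import hashlib
from pathlib import Path

TRUSTED_MANIFEST = {
    # filename -> SHA-256 digest captured at vetting time (example value)
    "corpus_part1.txt": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_dataset(data_dir: str) -> bool:
    """Return True only if every manifest file exists and its hash matches."""
    for name, expected in TRUSTED_MANIFEST.items():
        path = Path(data_dir) / name
        if not path.exists():
            return False
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest != expected:
            return False  # file altered since vetting: possible poisoning
    return True

if __name__ == "__main__":
    print("dataset intact:", verify_dataset("./training_data"))
```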

Zhou called for reshaping security frameworks with AI, advocating a "using AI to secure AI" approach. He proposed using security AI models to build systems covering foundation models, knowledge bases, and intelligent agents, ensuring a secure digital transformation for governments and enterprises.  

He stressed the need to advance security technology innovation and implementation, calling for policies to support leading enterprises with "security plus AI" solutions for foundation models, knowledge bases and intelligent agents. Leveraging security AI models, these firms can accelerate research and embed security across the AI lifecycle.