The Checkmarx threat research team, in a report shared with Hackread.com, revealed the dangers posed by seemingly trusted AI models harboring backdoors. Dubbed "Llama Drama," the vulnerability impacts the llama_cpp_python package, potentially allowing attackers to execute arbitrary code and compromise data and operations.
The vulnerability affects over 6,000 AI models on trusted platforms like Hugging Face, highlighting the need for AI platforms and developers to address supply chain security challenges.
The vulnerability was initially discovered by a cybersecurity researcher known by the handle @retr0reg on X (Twitter), who announced the find in the post below:
> LLM popular Dependency Llama-cpp-Python RCE 0-day✅
> Found a week after my 15th birthday. Around 1.3k Project are effected (via GH Search), including your favourite langchain, llama-index….
> This felt great but also reminds of how effective Supply-Chain Attack can be in ML/AI. pic.twitter.com/UUXJOBP94Z
Vulnerability Details
CVE-2024-34359 is a critical vulnerability resulting from the misuse of the Jinja2 template engine in the `llama_cpp_python` package, which opens the door to server-side template injection.
The issue lies in how template data is processed: without proper security measures such as sandboxing. Although Jinja2 supports sandboxing, it was not enabled in this case, allowing attacker-supplied template data to execute arbitrary code on the host system.
For context, Jinja2 is a Python library used for template rendering and HTML generation, but it can be a security risk if not configured correctly. The llama_cpp_python package, meanwhile, combines Python's ease of use with C++'s performance, making it well suited to complex AI models that handle large data volumes; that same flexibility, however, leaves it exposed to template injection attacks.
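To make the class of bug concrete, here is a minimal sketch (illustrative only, not the package's actual code) of what happens when attacker-controlled template text is rendered with a default Jinja2 `Environment` versus Jinja2's `SandboxedEnvironment`. The payload is a well-known Jinja2 SSTI string that reaches the `os` module through the engine's built-in `cycler` global:

```python
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

# A classic Jinja2 SSTI payload: it walks from the built-in `cycler`
# global to the `os` module and runs a shell command.
payload = "{{ cycler.__init__.__globals__.os.popen('echo pwned').read() }}"

# Unsandboxed rendering -- the dangerous pattern behind CVE-2024-34359.
# The command executes on the host and its output lands in the string.
print(Environment().from_string(payload).render())

# Sandboxed rendering -- access to dunder attributes is refused, so the
# same payload raises SecurityError instead of running the command.
try:
    SandboxedEnvironment().from_string(payload).render()
except SecurityError as exc:
    print(f"blocked by sandbox: {exc}")
```

Rendering chat templates through a sandboxed environment like this is the kind of hardening the patched release applies.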
Risk Assessment
According to Checkmarx's report, the vulnerability is critical because AI systems routinely process sensitive datasets. Flaws of this kind expose them to unauthorized actions, data theft, system compromise, and operational disruption, threatening both individual privacy and organizational integrity.
AI supply chains depend heavily on third-party libraries and frameworks, and because AI systems are integrated across many other systems, their attack surface is broad: a vulnerability in a single component can compromise the entire stack.
The good news is that the vulnerability has been fixed in version 0.2.72, with the addition of sandboxing and input validation measures. Organizations are advised to update promptly; a quick way to check an installed version is sketched below.
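As a quick sanity check, a sketch along these lines (assuming the package was installed under its PyPI name, llama-cpp-python, and carries a plain MAJOR.MINOR.PATCH version string) flags installs that predate the fix:

```python
from importlib.metadata import version

# CVE-2024-34359 is fixed in llama-cpp-python 0.2.72.
# Note: assumes a plain MAJOR.MINOR.PATCH version string; raises
# PackageNotFoundError if the package is not installed.
installed = version("llama-cpp-python")
if tuple(int(part) for part in installed.split(".")) < (0, 2, 72):
    print(f"llama-cpp-python {installed} is vulnerable; upgrade to >= 0.2.72")
else:
    print(f"llama-cpp-python {installed} includes the fix")
```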
Still, the incident highlights a broader risk in our increasingly connected world. Thousands of AI models and projects share the same dependencies, so a single weak link propagates to everything built on top of it. This is a wake-up call for developers and AI platforms to take software supply chain security seriously: just as you check your ingredients before you cook, you should verify that the software you depend on is safe and secure.