Cybersecurity experts have identified six critical vulnerabilities in the Ollama AI framework, which could be exploited for attacks ranging from denial-of-service (DoS) to model poisoning and theft.
“Collectively, the vulnerabilities could allow an attacker to carry out a wide range of malicious actions with a single HTTP request, including denial-of-service (DoS) attacks, model poisoning, model theft, and more,” said Avi Lumelsky, a researcher at Oligo Security, in a report published last week.
Ollama is a widely used, open-source platform for deploying large language models (LLMs) on Windows, Linux, and macOS. The project has over 7,600 forks on GitHub, and that popularity underscores the urgency of addressing these issues.
Overview of the Vulnerabilities:
- CVE-2024-39719 (CVSS score: 7.5) – Exploitable through the /api/create endpoint, allowing attackers to detect the existence of files on the server. Status: Patched in version 0.1.47
- CVE-2024-39720 (CVSS score: 8.2) – An out-of-bounds read via the /api/create endpoint, leading to application crashes and DoS conditions. Status: Patched in version 0.1.46
- CVE-2024-39721 (CVSS score: 7.5) – Resource exhaustion and a DoS state caused by repeatedly passing /dev/random to the /api/create endpoint. Status: Patched in version 0.1.34
- CVE-2024-39722 (CVSS score: 7.5) – A path traversal flaw in the /api/push endpoint, exposing server files and the complete directory structure. Status: Patched in version 0.1.46
- Model Poisoning Risk – Exploitable through the /api/pull endpoint when models are pulled from untrusted sources. Status: Unpatched; users advised to restrict internet-facing endpoints.
- Model Theft Risk – Possible via the /api/push endpoint when pushing to untrusted destinations. Status: Unpatched; mitigations include proxy or WAF protection.
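Since each CVE above lists a fixed release, an operator can quickly tell which advisories still apply to a given Ollama version. A minimal sketch (the helper names are illustrative, not part of Ollama; only the fixed-in versions come from the advisories above):

```python
# Fixed-in versions taken from the advisories listed above.
PATCHED_IN = {
    "CVE-2024-39719": (0, 1, 47),
    "CVE-2024-39720": (0, 1, 46),
    "CVE-2024-39721": (0, 1, 34),
    "CVE-2024-39722": (0, 1, 46),
}

def parse_version(v: str) -> tuple:
    """Turn a version string like '0.1.45' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def unpatched_cves(version: str) -> list:
    """Return the CVEs whose fix landed after the given version."""
    v = parse_version(version)
    return [cve for cve, fixed in sorted(PATCHED_IN.items()) if v < fixed]

# A deployment on 0.1.45 still lacks three of the four fixes.
print(unpatched_cves("0.1.45"))
```

Note that the two unpatched risks (model poisoning and model theft) are not version-dependent and have to be handled by restricting endpoint exposure instead.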
Mitigation Recommendations:
To reduce risk, maintainers recommend deploying web application firewalls and ensuring only necessary endpoints are exposed. “Meaning that, by default, not all endpoints should be exposed,” Lumelsky stressed, highlighting the risks of misconfigurations.
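The allowlist approach Lumelsky describes can be enforced with an ordinary reverse proxy. A minimal sketch, assuming nginx in front of Ollama on its default port 11434 (the exposed paths are illustrative; expose only what your clients actually need):

```nginx
server {
    listen 80;

    # Expose only the inference endpoints clients require.
    location ~ ^/api/(generate|chat)$ {
        proxy_pass http://127.0.0.1:11434;
    }

    # Everything else (/api/create, /api/pull, /api/push, ...) is blocked.
    location / {
        return 403;
    }
}
```

A WAF in front of the proxy adds a further layer, but the key point is the default-deny posture: endpoints that mutate or transfer models should never be reachable from the open internet.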
Exposure Stats: Oligo’s analysis found 9,831 internet-facing instances, primarily in China, the U.S., and Germany, with roughly 25% of them deemed vulnerable.
Historical Context:
This discovery follows a severe flaw (CVE-2024-37032) reported by Wiz, which could have led to remote code execution, demonstrating the recurring risks in improperly secured AI frameworks.
“Exposing Ollama to the internet without authorization is akin to exposing the Docker socket to public access,” Lumelsky noted, emphasizing the framework’s model-handling features.