Cryptopolitan
2026-01-29 17:45:22

Hackers are hijacking unprotected AI models to steal computing power

About 175,000 private AI servers are reportedly exposed to the public internet, giving hackers the opportunity to carry out illicit activities. The problem was reported by security researchers at SentinelOne and Censys, who tracked 7.23 million observations over nearly 300 days.

Hackers exploit an Ollama setting

The exposed systems run Ollama, open-source software that lets people run powerful AI models, such as Meta’s Llama or Google’s Gemma, on their own computers instead of through a hosted service like ChatGPT. By default, Ollama only talks to the computer it is installed on, but users often change that setting to make the server easier to reach remotely, which can accidentally expose the entire system to the public internet.

While many of these AI “hosts” are temporary, about 23,000 of them stay online almost all the time. These “always-on” systems are perfect targets for hackers because they provide free, powerful hardware that no big tech company is monitoring.

In the United States, about 18% of the exposed systems are in Virginia, likely due to the high density of data centers there, while 30% of China’s hosts are located in Beijing. Surprisingly, 56% of all exposed systems run on home or residential internet connections. That is a major problem because hackers can use home IP addresses to hide their identity: a malicious message sent through someone’s home AI looks like it is coming from a regular person rather than a criminal botnet.

How are criminals using these hijacked AI systems?

According to Pillar Security, a new criminal network known as Operation Bizarre Bazaar is actively hunting for these exposed AI endpoints, looking for systems running on the default port 11434 that do not require a password.
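Ollama serves a REST API on port 11434, and its GET /api/tags endpoint lists the installed models with no authentication by default, which is what scanners rely on to fingerprint exposed hosts. A minimal Python sketch of that fingerprinting step, run here against a canned response so it works offline; `check_host` is an illustrative helper for live use, and the sample JSON below is fabricated for the demo.

```python
"""Sketch: detect what an unauthenticated Ollama instance reveals.

Assumes Ollama's real GET /api/tags endpoint, which returns the list of
installed models without requiring credentials.
"""
import json
from urllib.request import urlopen


def parse_exposed_models(tags_json: str) -> list[str]:
    """Extract model names from an /api/tags response body."""
    payload = json.loads(tags_json)
    return [m["name"] for m in payload.get("models", [])]


def check_host(base_url: str, timeout: float = 3.0) -> list[str]:
    """Return the models an Ollama host reveals; raises on connection failure.

    Illustrative helper: point base_url at e.g. http://127.0.0.1:11434 to
    check your own machine.
    """
    with urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
        return parse_exposed_models(resp.read().decode())


# Offline demo against a response shaped like Ollama's (fabricated data):
sample = '{"models": [{"name": "llama3:8b"}, {"name": "gemma:2b"}]}'
print(parse_exposed_models(sample))  # ['llama3:8b', 'gemma:2b']
```

If `check_host` returns a model list for a public IP, that host is exposed exactly the way the report describes: anyone on the internet can enumerate and use it.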
Once they find one, they steal the “compute” and sell it to others who want to run AI tasks cheaply, such as generating thousands of phishing emails or creating deepfake content.

Between October 2025 and January 2026, the security firm GreyNoise recorded 91,403 attack sessions targeting these AI setups and found two main types of attacks. The first uses a technique called Server-Side Request Forgery (SSRF) to force the AI to connect to the hacker’s own servers. The second is a massive scanning campaign in which hackers send thousands of simple questions to find out exactly which AI model is running and what it is capable of doing.

About 48% of these systems are configured for “tool-calling,” meaning the AI is allowed to interact with other software, search the web, or read files on the computer. If a hacker finds a system like this, they can use “prompt injection” to trick the AI: instead of asking for a poem, they might tell it to “list all the API keys in the codebase” or “summarize the secret project files.” Since no human is watching, the AI often obeys these commands.

The Check Point 2026 Cyber Security Report shows that total cyberattacks increased by 70% between 2023 and 2025. In November 2025, Anthropic reported the first documented case of an AI-orchestrated cyber-espionage campaign, in which a state-sponsored group used AI agents to perform 80% of a hack without human help.

Several new vulnerabilities, such as CVE-2025-1975 and CVE-2025-66959, were discovered just this month. They are flaws that allow hackers to crash an Ollama server by sending it a specially crafted model file. Because 72% of these hosts use the same quantized model format, Q4_K_M, a single successful attack could take down thousands of systems at once.
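The Q4_K_M figure matters because quantization formats are often visible in the model tags a scanner can already read, so an attacker can estimate in advance which hosts a format-specific exploit would reach. A hedged sketch of that tally; the fleet list and the set of recognized formats are made up for the demo, and real tag conventions vary.

```python
# Sketch: tally quantization formats across a fleet of fingerprinted model
# names, illustrating why a flaw tied to one format (e.g. Q4_K_M) can hit
# most hosts at once. The fleet below is fabricated demo data.
from collections import Counter


def quant_of(model_name: str) -> str:
    """Return the quantization tag embedded in a model name, or 'unknown'.

    Assumes the common Ollama-style ':tag' suffix convention, such as
    'llama3:8b-instruct-q4_K_M'.
    """
    tag = model_name.rsplit(":", 1)[-1].lower()
    for q in ("q4_k_m", "q5_k_m", "q8_0", "fp16"):  # illustrative subset
        if tag.endswith(q):
            return q
    return "unknown"


fleet = [
    "llama3:8b-instruct-q4_K_M",
    "gemma:7b-instruct-q4_K_M",
    "mistral:7b-q8_0",
]
print(Counter(quant_of(m) for m in fleet))
```

In this toy fleet, two of three hosts share q4_k_m, so a single format-specific crash bug would take down the majority in one pass, which is the blast-radius concern the report raises.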
