Cryptopolitan
2026-01-29 17:45:22

Hackers are hijacking unprotected AI models to steal computing power

About 175,000 private servers are reportedly exposed to the public internet, giving hackers the opportunity to carry out their illicit activities. The problem was reported by the security researchers, SentinelOne and Censys, who tracked 7.23 million observations in over 300 days. Hackers exploit Ollama setting A recent report from SentinelOne and Censys found that over 175,000 private AI servers are accidentally exposed to the internet. These systems use Ollama, an open-source software that lets people run powerful AI models, like Meta’s Llama or Google’s Gemma , on their own computers instead of using a website like ChatGPT. By default, Ollama only talks to the computer it is installed on. However, a user can change the settings to make it easier to access remotely, which can accidentally expose the entire system to the public internet. They tracked 7.23 million observations over nearly 300 days and discovered that while many of these AI “hosts” are temporary, about 23,000 of them stay online almost all the time. These “always-on” systems are perfect targets for hackers because they provide free, powerful hardware that is not monitored by any big tech company. In the United States, about 18% of these exposed systems are in Virginia, likely due to the high density of data centers there. China has 30% of hosts located in Beijing. Surprisingly, 56% of all these exposed AI systems are running on home or residential internet connections. This is a major problem because hackers can use these home IP addresses to hide their identity. When a hacker sends a malicious message through someone’s home AI, it looks like it is coming from a regular person rather than a criminal botnet. How are criminals using these hijacked AI systems? According to Pillar Security, a new criminal network known as Operation Bizarre Bazaar is actively hunting for these exposed AI endpoints. They look for systems running on the default port 11434 that don’t require a password. Once they find one, they steal the “compute” and sell it to others who want to run AI tasks for cheap, like generating thousands of phishing emails or creating deepfake content. Between October 2025 and January 2026, the security firm GreyNoise recorded over 91,403 attack sessions targeting these AI setups. They found two main types of attacks. The first uses a technique called Server-Side Request Forgery (SSRF) to force the AI to connect to the hacker’s own servers. The second is a massive “scanning” campaign where hackers send thousands of simple questions to find out exactly which AI model is running and what it is capable of doing. About 48% of these systems are configured for “tool-calling.” This means the AI is allowed to interact with other software, search the web, or read files on the computer. If a hacker finds a system like this, they can use “prompt injection” to trick the AI. Instead of asking for a poem, they might tell the AI to “list all the API keys in the codebase” or “summarize the secret project files.” Since there is no human watching, the AI often obeys these commands. The Check Point 2026 Cyber Security Report shows that total cyber attacks increased by 70% between 2023 and 2025. In November 2025, Anthropic reported the first documented case of an AI-orchestrated cyber espionage campaign where a state-sponsored group used AI agents to perform 80% of a hack without human help. Several new vulnerabilities, like CVE-2025-1975 and CVE-2025-66959, were discovered just this month. They are flaws that allow hackers to crash an Ollama server by sending it a specially crafted model file. Because 72% of these hosts use the same specific file format called Q4_K_M, a single successful attack could take down thousands of systems at once. Join a premium crypto trading community free for 30 days - normally $100/mo.

Получите Информационный бюллетень Crypto
Прочтите Отказ от ответственности : Весь контент, представленный на нашем сайте, гиперссылки, связанные приложения, форумы, блоги, учетные записи социальных сетей и другие платформы («Сайт») предназначен только для вашей общей информации, приобретенной у сторонних источников. Мы не предоставляем никаких гарантий в отношении нашего контента, включая, но не ограничиваясь, точность и обновление. Никакая часть содержания, которое мы предоставляем, представляет собой финансовый совет, юридическую консультацию или любую другую форму совета, предназначенную для вашей конкретной опоры для любых целей. Любое использование или доверие к нашему контенту осуществляется исключительно на свой страх и риск. Вы должны провести собственное исследование, просмотреть, проанализировать и проверить наш контент, прежде чем полагаться на них. Торговля - очень рискованная деятельность, которая может привести к серьезным потерям, поэтому проконсультируйтесь с вашим финансовым консультантом, прежде чем принимать какие-либо решения. Никакое содержание на нашем Сайте не предназначено для запроса или предложения