AI Leak Fuels Malware Scams. Company source code is proprietary and typically closely guarded. However, a recent accidental software leak by Anthropic has triggered a cascade of nefarious behaviour by hackers. Anthropic is the well-known creator of Claude AI, and the accidental leak of its source code has allowed scammers to create malware that poses a financial danger to the public and to developers.
This cunning new scam puts a fresh twist on an old crime. Cybercriminals have taken advantage of the viral attention surrounding the leak and have produced a range of malware packaged to look attractive to developers. They create fake GitHub repositories that falsely claim to host "enterprise/unlocked" versions of the code, enticing developers and spreading their malware. The threat actors are so convincing that at least one malicious repository climbed to the first page of Google results for searches such as "leaked Claude Code."
Here's what happens when a developer falls for the deception: instead of an AI tool, the download is a 7-Zip archive containing ClaudeCode_x64.exe. The executable launches two threats: GhostSocks and Vidar. GhostSocks is a proxy tool that turns the victim's computer into a relay for malicious traffic, and it is sold to other threat actors on the dark web. Vidar is a known "info-stealer" that scoops up browser cookies, passwords, and cryptocurrency wallet data.
This is not the first problem Anthropic has faced. Recently, a security company discovered "ShadowPrompt", a flaw in Claude's Chrome extension that enables data theft via zero-click attacks. Other groups have found further vulnerabilities in Anthropic's code, which they are calling "Cloudy Day." While Anthropic has moved quickly to address these flaws, the fake versions on GitHub and other platforms are beyond its control. Part of the problem may be the company's fast-paced growth: its services have at times approached peak-hour capacity as it tries to keep up with massive demand. Cybercriminals are delighted by this popularity and exploit the surge in interest and downloads, hoping their own downloads catch on and spread like wildfire.
All of this has a domino effect on the general public, as developers who are oblivious to the malware incorporate it into their own code. The developer community has been advised to use only trusted channels and to avoid any offers promising "no usage limits" or "unlocked features." Such offers should be a warning flag.
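One practical way to follow the "trusted channels" advice is to verify a download's checksum against the value the vendor publishes on its official site before ever running the file. The sketch below is a minimal illustration of that habit in Python; the file path and checksum in the usage comment are hypothetical, not values from any real release.

```python
import hashlib
import hmac

def sha256_of_file(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_hex):
    """Return True only if the file's digest matches the published checksum.

    compare_digest resists timing attacks; unnecessary for a local check,
    but a good default habit when comparing secrets or digests.
    """
    return hmac.compare_digest(sha256_of_file(path), expected_hex.lower())

# Hypothetical usage -- the checksum would come from the vendor's site:
# if not verify_download("installer.7z", "3a7bd3e2360a..."):
#     raise SystemExit("Checksum mismatch: do not run this file.")
```

A mismatch does not tell you what the file contains, only that it is not the file the vendor published, which is reason enough to delete it.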
As the popularity of AI grows, everyone should expect continued sophisticated attacks such as these. Cybercriminals will push their approaches into unexpected areas, and everyone should stay on the lookout.
"AI is the current lure that has attracted cybercriminals." DaVinci Cybersecurity has shared this latest information to help companies, the general public and developers alike protect themselves from malicious and harmful malware.
– Sharon Knowles, CEO, DaVinci Cybersecurity