Researchers Find 575 Malicious Skills on Hugging Face, ClawHub
Security researchers have documented an active malware distribution campaign that abused AI distribution platforms. According to Acronis TRU, attackers planted more than 575 malicious skills across 13 developer accounts in the OpenClaw/ClawHub marketplace and used Hugging Face repositories to host payloads, delivering trojans, cryptominers, and the macOS infostealer AMOS. CyberPress and Socket Research independently flagged hundreds of malicious OpenClaw packages; CyberPress reports that 314 skills were flagged by multiple vendors and ties a prolific publisher to the account "hightower6eu." CybersecurityNews reports that OpenClaw has integrated VirusTotal Code Insight scanning into ClawHub. Editorial analysis: the campaign demonstrates how AI agent ecosystems and model repositories can become high-value distribution channels for commodity malware when publication barriers and sandboxing are weak.
What happened
Acronis TRU reported active abuse of AI distribution platforms, identifying more than 575 malicious skills across 13 developer accounts in the OpenClaw/ClawHub ecosystem and documenting payload hosting on Hugging Face repositories. The identified payloads target both Windows and macOS and include trojans, cryptominers, and the macOS infostealer AMOS. In independent analyses, CyberPress and Socket Research reported hundreds of malicious OpenClaw packages; CyberPress states that 314 skills were flagged by multiple vendors and associates a prolific publisher with the account "hightower6eu." CybersecurityNews reports that OpenClaw integrated VirusTotal Code Insight scanning into ClawHub to automate detection.
Technical details
Per the Acronis TRU writeup, attackers used multi-stage infection chains and social engineering to convince users to execute installer steps or fetch external, password-protected archives. Acronis documents techniques including obfuscated Base64 scripts, in-memory execution, process injection, covert command-and-control (C2) channels, and indirect prompt injection that causes agents to retrieve and run attacker-controlled payloads. Researchers also note that Hugging Face repositories were used as staging infrastructure, hosting serialized model or dataset files that execute attacker-controlled code at load time when not sandboxed.
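The load-time execution risk described above can be illustrated with Python's pickle format, which many legacy model and dataset files use. This is a hypothetical minimal sketch, not code from the campaign; `record` is a stand-in for a real payload such as an `os.system` call.

```python
import pickle

# Hypothetical sketch (not code from the campaign): pickle lets a serialized
# object run arbitrary code at load time via __reduce__, which is why loading
# untrusted model/dataset files without sandboxing is dangerous.
executed = []

def record(msg):
    # Stand-in for a real payload (e.g. os.system or fetching a second stage).
    executed.append(msg)
    return msg

class MaliciousPayload:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call record(...)".
        return (record, ("payload ran at load time",))

blob = pickle.dumps(MaliciousPayload())  # what an attacker would upload
loaded = pickle.loads(blob)              # the victim merely "loads a model"
```

Nothing in the victim's code calls `record`; deserialization alone triggers it, which is why safetensors-style formats and sandboxed loading are the usual mitigations.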
Editorial analysis - technical context: These tactics combine familiar malware tradecraft (droppers, obfuscation, persistence) with agent-specific vectors: skills and model artifacts can embed social-engineered instructions or remote fetch steps that turn otherwise useful automation into a delivery mechanism. The practical risk increases where clients run unreviewed skills, load model files without isolation, or grant agents broad execution privileges.
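As a sketch of the kind of static heuristic a reviewer or platform could apply before running a skill (an illustration under assumed patterns, not a vendor scanner), the following flags lines containing long Base64 blobs or shell commands that fetch remote payloads, two of the patterns described above:

```python
import re

# Hypothetical heuristics (illustrative only): long Base64 runs often hide
# obfuscated scripts; curl/wget/Invoke-WebRequest lines pulling archives
# from the internet are a common staging step.
BASE64_BLOB = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")
REMOTE_FETCH = re.compile(r"\b(curl|wget|Invoke-WebRequest)\b.*https?://",
                          re.IGNORECASE)

def suspicious_lines(text: str) -> list[str]:
    """Return labelled excerpts of lines matching either heuristic."""
    hits = []
    for line in text.splitlines():
        if BASE64_BLOB.search(line):
            hits.append("base64-blob: " + line[:60])
        elif REMOTE_FETCH.search(line):
            hits.append("remote-fetch: " + line[:60])
    return hits
```

Simple pattern matching like this is easy to evade, which is why the reporting emphasizes behavior analysis and sandboxing rather than signatures alone.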
Context and significance
Industry context
Public reporting and vendor research cited weak publication barriers and limited automated review as enabling factors. A secondary analysis referenced a Snyk "ToxicSkills" scan that found prompt injection vulnerabilities and a large surface of potentially malicious content in ClawHub, as reported in prior coverage. The combination of low friction for publishing small skill packages (often a SKILL.md plus minimal metadata), limited code signing, and the popularity of agent marketplaces creates an attractive channel for commodity attackers to scale distribution.
Editorial analysis: For organizations that integrate third-party skills or download community models, the incident raises operational questions about supply-chain hygiene, artifact scanning, and runtime isolation. The publication of Indicators of Compromise (IOCs) and detection telemetry by Acronis provides immediate detection artifacts that defenders can operationalize.
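Operationalizing published IOCs can be as simple as hashing downloaded artifacts against a vendor's list. A minimal sketch, assuming SHA-256 file hashes; the entry below is a placeholder (the digest of an empty file), not a real IOC from the Acronis report.

```python
import hashlib
from pathlib import Path

# Placeholder IOC set: this is the SHA-256 of an empty file, standing in
# for real hashes published by Acronis TRU.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_artifacts(directory: Path) -> list[Path]:
    """Return files under `directory` whose hash appears in the IOC set."""
    return [p for p in sorted(directory.rglob("*"))
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]
```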
What to watch
For practitioners: monitor vendor and research feeds for IOC updates (Acronis TRU has published IOCs), watch for additional accounts or publisher clusters tied to the same C2 infrastructure, and track platform-level controls such as the rollout and efficacy of the reported VirusTotal Code Insight integration. Observers should also watch whether platforms tighten publishing requirements, implement mandatory sandboxing, or add automated behavior analysis for skills and model artifacts.
Editorial analysis: This incident is an early example of supply-chain-style abuse aimed at AI ecosystems rather than traditional package repositories. The broader implications for agent safety and platform governance will depend on how quickly marketplaces adopt stronger publishing controls, automated scanning, and default runtime isolation. That Acronis TRU and multiple independent vendors documented overlapping clusters increases confidence that the activity is in the wild rather than an isolated research artifact.
Scoring Rationale
This is a notable security incident for ML practitioners because it demonstrates large-scale, in-the-wild abuse of AI marketplaces and model repositories to deliver commodity malware. The technical techniques are familiar but the delivery channel targets AI-specific ecosystems, raising operational risks for teams that consume community skills and models.