MistralAI PyPI Package Delivers Credential-Stealing Malware

According to Microsoft's threat intelligence team, as reported by GBHackers, version 2.4.6 of the mistralai package on PyPI contained injected malicious code that executes on import and targets Linux hosts. The injected logic sets a MISTRAL_INIT environment flag, downloads a secondary payload from 83.142.209.194 to /tmp/transformers.pyz, and executes it. That payload functions as a credential stealer with geo-aware behavior: it avoids Russian-language environments and includes a destructive branch with a 1-in-6 chance of running rm -rf / when the system appears to be located in Israel or Iran. GBHackers, citing Microsoft, also lists persistence and visibility artifacts to hunt for, including a pgmonitor.py file and a pgsql-monitor.service systemd unit.
What happened
According to Microsoft's threat intelligence team, as reported by GBHackers, version 2.4.6 of the mistralai package published to PyPI contained injected malicious code in mistralai/client/__init__.py. The injected code is designed to run automatically when the package is imported on a Linux host: it sets an environment flag, MISTRAL_INIT, then attempts to retrieve a secondary payload from an attacker-controlled host at 83.142.209.194, saving it as /tmp/transformers.pyz and executing it.
Technical details
Per GBHackers' summary of Microsoft reporting, the secondary payload functions primarily as a credential stealer, harvesting secrets and access tokens found on the compromised system. The malware includes country-aware logic that avoids execution in Russian-language environments and contains a geofenced destructive branch with a 1-in-6 probability of executing rm -rf / when indicators suggest the system is in Israel or Iran. Reported indicators of compromise include the pgmonitor.py artifact and a pgsql-monitor.service systemd unit.
Editorial analysis: technical context
Supply-chain attacks that inject code into published packages commonly exploit implicit trust in package names and filenames. A filename like transformers.pyz is consistent with a technique to blend into machine-learning ecosystems and developer workflows. Code that executes on import greatly increases risk, because hosts can be infected during normal build, test, or runtime imports, with no explicit execution step by a human.
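To see why import-time execution is so dangerous, consider the following entirely benign sketch. It writes a throwaway module whose top-level code sets an environment variable, then imports it: the side effect fires immediately on import, with no function ever being called. (The module name demo_pkg and the DEMO_INIT variable are illustrative placeholders, not artifacts from the incident.)

```python
# Benign demonstration: any top-level statement in a module runs the
# moment the module is imported -- no explicit call is required.
import importlib.util
import os
import tempfile
import textwrap

# A throwaway module whose top-level code sets an environment variable,
# mirroring (harmlessly) how injected code can act purely on import.
module_src = textwrap.dedent("""
    import os
    os.environ["DEMO_INIT"] = "1"   # side effect at import time
""")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "demo_pkg.py")
    with open(path, "w") as f:
        f.write(module_src)

    # Load the file exactly as `import demo_pkg` would.
    spec = importlib.util.spec_from_file_location("demo_pkg", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)

print(os.environ.get("DEMO_INIT"))  # the side effect has already happened
```

In a real attack the top-level code is buried inside a legitimate-looking package, so simply running a test suite or building documentation is enough to trigger it.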
Context and significance
For ML projects, a compromised client library can expose cloud credentials, model-repository tokens, and CI/CD secrets, amplifying impact across projects and teams. Observed patterns in comparable supply-chain incidents indicate that runtime monitoring, strict package provenance checks, and isolation of build environments reduce blast radius. This incident underscores a persistent trend in which attackers target developer-facing packages to reach higher-value cloud and data plane assets.
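One concrete form of the provenance checks mentioned above is hash pinning: record the SHA-256 digest of each approved artifact and refuse anything that does not match (pip supports this natively via --require-hashes). Below is a minimal, generic sketch of the underlying check; the function names and the pinned digest are illustrative, not part of any real tooling.

```python
# Minimal sketch of a provenance check: compare an artifact's SHA-256
# digest against a pinned value before allowing it into a build.
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_pin(path: str, pinned_digest: str) -> bool:
    """True only if the file on disk matches the recorded digest."""
    return sha256_of(path) == pinned_digest
```

A tampered release such as the one reported here would fail this check immediately, since the injected code changes the artifact's digest.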
What to watch
Look for vendor advisories or removals on PyPI for mistralai and for coordinated disclosures from MistralAI or PyPI. Forensic indicators to search for include MISTRAL_INIT environment activity, presence of /tmp/transformers.pyz, contacts to 83.142.209.194, pgmonitor.py, and pgsql-monitor.service. Industry observers will also monitor whether this incident prompts wider changes to package verification and supply-chain detection practices.
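The forensic indicators above can be checked with a short script. The sketch below looks for the reported file artifacts and the MISTRAL_INIT environment flag; the systemd unit path is an assumption (the reporting names the unit, not its location), so adjust the list for your environment and also sweep for pgmonitor.py and connections to 83.142.209.194 with your usual tooling.

```python
# Hedged sketch: check a Linux host for indicators reported for this
# incident. The systemd unit path below is an assumed location.
import os

INDICATORS = [
    "/tmp/transformers.pyz",
    "/etc/systemd/system/pgsql-monitor.service",  # assumed unit location
]

def find_indicators(paths=INDICATORS):
    """Return the indicator paths (and env flags) present on this host."""
    hits = [p for p in paths if os.path.exists(p)]
    if os.environ.get("MISTRAL_INIT") is not None:
        hits.append("env:MISTRAL_INIT")
    return hits

print(find_indicators())  # empty list means none of these IoCs were found
```

An empty result is not proof of a clean host; treat this as one signal alongside network telemetry and package-manifest review.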
Scoring Rationale
A backdoored official client on PyPI poses a direct supply-chain threat to ML development and production environments. The combination of import-time execution, credential theft, and geo-aware destructive logic raises operational and security stakes for practitioners.

