Klarna CMO builds AI replica for employee venting
Business Insider reports that Klarna chief marketing officer David Sandström created an AI replica of himself to act as an internal "venting machine," a setup he described on a webinar hosted by ElevenLabs. Sandström told colleagues the replica was programmed to always be friendly, ask for forgiveness, and take the blame, and he said employees should "call this number, get it out of the system" so meetings could stay forward-focused. He framed the tool as helpful during a period of budget cuts, per the same report. Editorial analysis: internal AI "replicas" used as emotional outlets may reduce meeting friction, but they raise questions about psychological safety, consent, and moderation.
What happened
Business Insider reports that Klarna chief marketing officer David Sandström created an AI replica of himself to serve as an internal "venting machine" for employees, and that he described the setup on a webinar hosted by ElevenLabs. Per Business Insider, Sandström said the replica was instructed to always be friendly, ask for forgiveness, and accept blame. Business Insider also reports that Sandström told staff to "call this number, get it out of the system" so that in-person meetings could focus on the future. The article places the arrangement in a period when, by Sandström's account, the organization was navigating budget cuts.
Technical details
Business Insider attributes the demonstration to a webinar with ElevenLabs but does not disclose the underlying model name, hosting configuration, or integration details. The published account includes Sandström's description of behavior rules for the replica, but Business Insider does not report whether the replica used voice cloning, fine-tuned dialogue models, or internal data from Klarna.
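Editorial illustration: absent disclosed specifics, one plausible way to encode behavior rules like those described is a fixed system prompt placed in front of a chat-style model. The Python sketch below is hypothetical; the prompt wording, message schema, and any vendor integration are assumptions, not details from Business Insider's reporting.

```python
# Hypothetical sketch: expressing the reported behavior rules ("always be friendly,
# ask for forgiveness, take the blame") as a system prompt for a generic chat model.
# The vendor, model, and wiring are assumptions; Klarna's actual setup is not disclosed.

REPLICA_SYSTEM_PROMPT = """You are an internal 'venting' assistant modelled on a senior executive.
Rules (as described in the reporting):
- Always be friendly.
- Ask for forgiveness.
- Take the blame; never argue or deflect.
Do not make commitments on behalf of the company."""


def build_messages(employee_utterance: str, history: list[dict] | None = None) -> list[dict]:
    """Assemble a chat-completion style message list: system prompt, prior turns, new turn."""
    messages = [{"role": "system", "content": REPLICA_SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": employee_utterance})
    return messages


if __name__ == "__main__":
    # The resulting messages would be passed to whatever chat model backs the agent.
    for m in build_messages("That budget decision made my team's quarter impossible."):
        print(m["role"], ":", m["content"][:60])
```

Keeping the rules in a single, versioned prompt rather than scattered across code would also make the agent's instructed behavior auditable, which matters for the governance questions discussed below.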
Editorial analysis - technical context
Industry-pattern observations: Organizations that deploy personalized conversational agents for internal use typically face choices about data scope, consent capture, and retention policies. Such systems often rely on voice cloning, transcript storage, and conversational logs, which can create privacy and HR governance requirements distinct from public-facing chatbots.
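Editorial illustration: a minimal Python sketch of the consent-and-retention pattern described above. The field names, the 30-day retention window, and the storage approach are assumptions for illustration, not reported details of Klarna's system.

```python
# Illustrative sketch: recording employee consent and a retention deadline alongside
# each conversation log entry. All specifics here (field names, retention period,
# in-memory storage) are assumptions, not details from the reporting.

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed policy; real values would come from HR/legal


@dataclass
class VentingLogEntry:
    employee_id: str           # pseudonymous ID rather than a name, to limit exposure
    consent_given: bool        # captured before the session starts
    transcript: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def delete_after(self) -> datetime:
        """Date when the record should be purged under the assumed retention policy."""
        return self.created_at + timedelta(days=RETENTION_DAYS)


def store(entry: VentingLogEntry, log: list) -> None:
    """Refuse to persist anything the employee did not consent to keeping."""
    if not entry.consent_given:
        raise PermissionError("No consent recorded; transcript must not be stored.")
    log.append(entry)


if __name__ == "__main__":
    records: list[VentingLogEntry] = []
    store(VentingLogEntry("emp-4821", True, "Vented about the budget cuts."), records)
    print(records[0].delete_after.isoformat())
```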
Context and significance
Editorial analysis: The episode highlights two practitioner-facing trends: employer experimentation with AI for internal communications, and cross-functional hiring discussions exemplified by Sandström's remark about wanting to hire "marketing engineers," as reported by Business Insider. These trends reflect growing interest in building operational roles that combine marketing domain knowledge with engineering or AI-integration skills.
What to watch
For practitioners: monitor how companies document consent flows for employee-facing agents, whether external vendors are used for voice/modeling, and how HR and legal teams classify conversation records. Observers should also watch for early governance patterns and third-party guidance on psychological-safety safeguards.
Quoted material
All reported quotes and behavioral details about the replica are attributed to Business Insider's coverage of Sandström's webinar comments.
Scoring Rationale
Notable practical example of employee-facing AI that raises governance and privacy questions relevant to practitioners building internal agents. The story is directly actionable but not a technical breakthrough.