
Sam Altman's Home Was Hit by a Molotov Cocktail on Friday. On Sunday, Someone Came Back With a Gun.

By the LDS Team · Let's Data Science · 6 min read
Three suspects are in custody after two separate attacks on the OpenAI CEO's San Francisco residence in 48 hours, escalating fears that anti-AI anger is turning violent across the tech industry.

Shortly before 4 a.m. on Friday, April 10, a security camera near Chestnut and Jones streets in San Francisco's Russian Hill neighborhood recorded a 20-year-old man walking up to the front gate of Sam Altman's home with a glass bottle in his hand. The bottle had a rag stuffed into it. The rag was on fire.

Daniel Alejandro Moreno-Gama threw the Molotov cocktail at the metal gate, igniting a small fire. Private security guards stationed at the property extinguished the flames within minutes. No one was injured. Moreno-Gama fled on foot.

Just after 5 a.m., he showed up at OpenAI's headquarters in the Mission Bay district. He allegedly threatened to burn the building down. Officers from the San Francisco Police Department recognized him from the surveillance footage at Altman's home and arrested him on the spot.

The charges filed against Moreno-Gama include attempted murder, arson, and possession of an incendiary device. He was booked into San Francisco County Jail.

That was Friday. The weekend was worse.

A Second Attack Came 46 Hours Later

At 1:40 a.m. on Sunday, April 12, a Honda sedan pulled up in front of Altman's property on the Lombard Street side. The car slowed, passed the house, then doubled back. According to the initial police report, the passenger fired a single gunshot at the residence.

Surveillance footage and on-site security personnel confirmed the incident. The vehicle's license plate was captured on camera. SFPD located the suspects without incident in the 2000 block of Taylor Street, less than a mile away.

Amanda Tom, 25, and Muhamad Tarik Hussein, 23, were arrested on suspicion of negligent discharge of a firearm. Officers executing a search warrant at their residence recovered three firearms.

The San Francisco Police Department said there is no evidence linking the two attacks. The FBI is coordinating with local police on both investigations. The San Francisco District Attorney's office said decisions on jurisdiction and final charges would come within a week.

The Attacks Landed During the Worst Week of Altman's Public Life

The Molotov cocktail hit Altman's gate on the same day that The New Yorker published a sweeping investigative profile of him, written by Ronan Farrow and Andrew Marantz.

The piece, based on interviews with more than 100 people, painted Altman as a leader driven by "a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart." One unnamed OpenAI board member offered a blunt assessment: "He's unconstrained by truth." The board member described Altman as having "a strong desire to please people, to be liked in any given interaction" combined with "almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

Altman published a blog post Friday evening responding to both the attack and the profile. He acknowledged "a lot of things I'm proud of and a bunch of mistakes," and said a tendency toward "being conflict-averse" had "caused great pain for me and OpenAI."

On the attacks themselves, Altman said: "The fear and anxiety about AI is justified. We are in the process of witnessing the largest change to society in a long time, and perhaps ever."

OpenAI's official statement was brief: "Thankfully, no one was hurt. We deeply appreciate how quickly SFPD responded and the support from the city."

A Pattern That Security Experts Say Is Accelerating

The attacks on Altman's home are part of a series of escalating threats against AI companies and their leaders that security professionals say has been building for months.

In November 2025, a 27-year-old man making violent threats at OpenAI's San Francisco headquarters prompted an office-wide lockdown. Earlier in the same week as the Altman attacks, a shooting occurred at an Indiana official's home; a note left at the scene read "No data centers." The anti-AI activist group Stop AI denied involvement in the Altman incident but reaffirmed its opposition to "frontier AI systems."

Kent Moyer, CEO of The World Protection Group, a firm that provides executive protection services, told the SF Standard that "executives are more vulnerable than ever" and warned that "across the country, threats are going up."

A timeline of the escalation:

December 2024: UnitedHealthcare CEO Brian Thompson fatally shot outside a Manhattan hotel, resetting executive security baselines nationwide
November 2025: Man making violent threats at OpenAI's SF headquarters triggers an office-wide lockdown
Early April 2026: Shooting at an Indiana official's home; a note left at the scene reads "No data centers"
April 10, 2026: Molotov cocktail thrown at Altman's home; the suspect later threatens OpenAI HQ
April 12, 2026: Gunshot fired at Altman's home from a passing vehicle; two suspects arrested

OpenAI is actively recruiting for industrial security and corporate security roles, according to current job postings. The company has faced protests at its San Francisco offices throughout 2026, driven by opposition to its partnership with the U.S. Department of Defense and broader concerns about AI's impact on employment, privacy, and resource consumption.

The Counterargument: Anger That Has Legitimate Roots

The people protesting AI companies are not a monolith, and dismissing them as fringe actors misses the structural forces driving the backlash.

AI data centers consume enormous amounts of electricity and water. The technology is displacing workers across industries, from copywriters to customer service representatives to junior software engineers. OpenAI's collaboration with the Pentagon remains deeply controversial, even after Altman called the initial deal "sloppy." And the New Yorker profile raised governance questions that go beyond personality: whether a single individual should control the trajectory of a technology that its own creators describe as potentially the most consequential in human history.

None of this justifies violence. But the gap between the concerns driving anti-AI activism and the responses offered by AI companies has widened throughout 2026. When Altman proposed robot taxes, a public wealth fund, and a four-day workweek in March, critics like Gary Marcus, the NYU cognitive scientist, called it "a cover story" designed to deflect regulatory attention rather than address root causes.

The question that security experts, policymakers, and AI executives are now grappling with is not whether the backlash will continue. It is whether the industry can address the legitimate grievances fueling it before the next attack causes real harm.

The Bottom Line

In the span of 48 hours, three people were arrested for two separate attacks on the home of the most visible figure in artificial intelligence. A Molotov cocktail. A gunshot. Three firearms seized. An FBI investigation. And a New Yorker profile that landed like accelerant on a fire that was already burning.

The attacks did not injure anyone. The property damage was minimal. But the symbolic weight is significant: the person most associated with building AI that could reshape civilization is now living with armed security, surveillance cameras, and the knowledge that strangers have targeted his home twice in a single weekend.

Altman himself framed the moment as a reckoning rather than a random act. "The fear and anxiety about AI is justified," he wrote. The question is what follows from that admission. If the companies building the most powerful AI systems cannot demonstrate that they take public concerns seriously through action rather than blog posts, the security teams will keep getting busier.

As Moyer put it: "Across the country, threats are going up." Friday and Sunday proved he was right.
