Meta Tests Meta AI Account That Cannot Be Blocked

Meta is testing a Threads feature that lets users tag a public Meta AI account for answers or context, but the profile currently cannot be blocked, according to reports from The Verge and Engadget. The absence of a block option became a trending topic on Threads, "Users cannot block Meta AI," with more than one million posts, Engadget reports.
What happened
Meta is testing a Threads feature that lets users tag a public Meta AI account to request answers or context in conversations, The Verge reports. The test is initially available in Argentina, Malaysia, Mexico, Saudi Arabia, and Singapore, according to The Verge. Both The Verge and Engadget report that the Meta AI profile currently lacks a standard "block" option in the three-dots menu, and several users attempting the platform's report flow did not see a resulting block action (Engadget). Engadget also reports that "Users cannot block Meta AI" became a top trend on Threads with more than one million posts. A Meta spokesperson, Christine Pai, told The Verge, "Users can manage their Meta AI experience during the test."
Editorial analysis - technical context
Industry-pattern observations: social platforms have recently experimented with AI assistants surfaced as visible accounts rather than purely background services. Reporting frames Meta's approach as a public-facing bot similar to xAI's Grok on X, a pattern that moves some user-AI interactions into ordinary reply threads rather than a private assistant interface. Companies running visible bot accounts typically tune discovery and recommendation controls, but choosing which account-level controls to expose is a product design decision that affects both moderation and user experience.
Context and significance
The story matters because it sits at the intersection of content moderation, feed ranking, and user controls. Public backlash to features that cannot be blocked or fully opted out of tends to escalate quickly on social platforms, as seen in earlier incidents reported by Engadget and The Verge. For practitioners, this highlights how product choices about control surfaces (block, mute, "not interested") shape downstream moderation workload and signal design priorities to users.
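To make the "control surfaces" point concrete, here is a minimal, purely hypothetical sketch of how a platform might gate which account-level controls appear in a profile's overflow menu. None of these names reflect Meta's actual code or API; the sketch only illustrates that withholding a single control (here, "block") for a class of accounts is a deliberate product choice, not a technical constraint:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Account:
    handle: str
    is_platform_bot: bool = False  # e.g. a platform-operated AI account

# Controls every profile exposes by default (illustrative set).
DEFAULT_CONTROLS = frozenset({"block", "mute", "report", "not_interested"})

# Controls withheld from platform-operated bot accounts in this sketch,
# mirroring the reported absence of a block option on the Meta AI profile.
BOT_WITHHELD = frozenset({"block"})

def visible_controls(account: Account) -> frozenset:
    """Return the control surfaces shown in a profile's overflow menu."""
    if account.is_platform_bot:
        return DEFAULT_CONTROLS - BOT_WITHHELD
    return DEFAULT_CONTROLS

user = Account("someone")
bot = Account("meta_ai", is_platform_bot=True)
assert "block" in visible_controls(user)
assert "block" not in visible_controls(bot)
```

The practitioner takeaway is that each control withheld from users (block, mute, "not interested") removes a self-serve remedy and tends to redirect that demand into report queues and public complaint threads.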
What to watch
Observers should watch for an official Meta support article or product update clarifying available controls, and for whether Meta expands the test beyond the initial countries reported by The Verge. Also monitor Threads' developer and trust-and-safety channels for policy notes on how AI account replies are handled in ranking, reply hiding, and reporting flows. If Meta provides technical docs or a changelog, they will be the primary source for how the feature is intended to operate.
Scoring Rationale
The item is a notable product usability and moderation story from a major platform, relevant to practitioners who build or moderate social AI features. It is not a frontier-model or infrastructure milestone, so its importance is mid-range.
