Broadcaster Mentink Uses AI To Formulate Questions

Angie Mentink, a longtime Seattle Mariners broadcaster, responded to a viral clip showing her consulting an AI assistant to generate postgame questions. She said she "experimented" with Google Gemini earlier this season to supplement her pregame notes, framing AI as an augmentation tool rather than a replacement for journalistic judgment. The clip, filmed without her consent, prompted online criticism and gendered harassment; Mentink, who is recovering from a serious stroke this year, answered the backlash with self-deprecating humor and a reminder that veteran reporters have long adopted new tools. The episode highlights practical, ethical, and privacy questions for journalists using AI in live sports coverage.
What happened
Angie Mentink, a veteran Seattle Mariners broadcaster, faced social media backlash after a short video showed her consulting Google Gemini to generate postgame questions. The clip, filmed over her shoulder without consent, went viral and drew criticism that she used AI as a crutch rather than a research aid. Mentink pushed back on X with the line "Currently asking AI how to handle going viral for using AI," and explained she had "experimented" with AI earlier in the season to supplement her question list as she returned to work after a serious stroke.
Technical details
The assistant shown in the footage is Google Gemini, a conversational multimodal model used here as a prompt-based ideation tool. Practitioners should note three operational facts about this use case:
- The workflow is prompt-driven ideation: a short context, a task such as "good questions after a tough loss in baseball," and a set of candidate questions that the reporter reviews and edits.
- This is a human-in-the-loop pattern: the journalist selects, adapts, and follows up; the AI supplies raw options, not verified facts or quotes.
- The interface carries risk: using a generative model in a live or near-live environment raises latency, hallucination, and privacy-exposure concerns if screens are visible to fans.
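The human-in-the-loop pattern described above can be sketched as a small helper that builds an ideation prompt, collects raw candidates from a model, and leaves all selection and editing to the reporter. This is a minimal illustration, not Mentink's actual workflow; `ask_model` is a hypothetical stand-in for whatever client call a real deployment would make (e.g., to Gemini's API):

```python
from typing import Callable, List

def build_prompt(context: str, count: int = 5) -> str:
    """Compose a short ideation prompt from game context."""
    return (
        f"Suggest {count} concise postgame interview questions "
        f"for this situation: {context}. One question per line."
    )

def draft_questions(context: str,
                    ask_model: Callable[[str], str],
                    count: int = 5) -> List[str]:
    """Prompt-driven ideation: the model returns raw candidates only.

    The reporter reviews, edits, or discards every line; nothing
    returned here is treated as a verified fact or quote.
    """
    raw = ask_model(build_prompt(context, count))
    candidates = []
    for line in raw.splitlines():
        # Strip any list numbering the model may have added.
        line = line.strip().lstrip("0123456789.-) ").strip()
        if line:
            candidates.append(line)
    return candidates[:count]

# Usage with a stubbed model; a real deployment would call an API client here.
def fake_model(prompt: str) -> str:
    return ("1. How do you regroup after a loss like this?\n"
            "2. What did you see from the bullpen tonight?")

drafts = draft_questions("tough loss in baseball", fake_model)
```

The design point is the boundary: the model call produces candidates, and everything after it (filtering, trimming, capping the count) is mechanical cleanup, with editorial judgment deliberately left outside the function.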
Context and significance
This is not a model-release story, but it is relevant to AI governance and newsroom operations. Newsrooms and sports desks are rapidly adopting generative tools for research, drafting, and ideation. The episode surfaces three persistent friction points: the social optics of on-camera device use; disclosure and transparency expectations for audience-facing roles; and the privacy harms that arise when bystanders film talent without consent. It also illustrates a broader cultural pattern in which tools that speed routine tasks are recast as ethical failures when used by women in high-visibility roles. For AI engineers and product teams, the case underscores the need for UI affordances that reduce exposure (e.g., ephemeral prompt history, privacy screens, low-visibility input modes) and for APIs that make provenance, confidence, and source attribution easy to surface.
What to watch
Expect sports networks and local broadcasters to update policies on device usage, disclosure, and on-camera workflow. Product teams at model providers should prioritize features that support ephemeral interaction and better provenance signals for generated suggestions. For newsroom leaders and ML practitioners collaborating with media, this is a reminder to pair deployment guidance with user training on ethical, privacy, and communication practices.
Scoring Rationale
The story is a solid, practical example of generative AI adoption and the social risks that follow. It is not a technical breakthrough but matters for practitioners designing UI, provenance, and operational policies for public-facing deployments.