Australians Demand Brands Disclose AI-Generated Content

Meltwater and YouGov released a global report, "Trust in the Age of Generative AI," finding that 86% of consumers want brands to explicitly label AI-generated content. Australians are especially cautious: 89% call for stronger government regulation and 86% say disclosure is important. The research, drawn from nearly 10,000 respondents across seven markets, shows a trust gap where 32% would trust brands less if content is AI-generated while 15% would trust them more. Practitioners should treat disclosure as both compliance and product design work: provenance, labeling, and communication strategy will shape consumer trust and brand risk.
What happened
Meltwater and YouGov published a global study, "Trust in the Age of Generative AI," surveying nearly 10,000 consumers across seven markets. The report finds 86% of respondents believe AI-generated content should be disclosed, and Australians are among the most cautious with 89% calling for stronger government regulation. The data surfaces a clear trust trade-off: 32% would trust brands less if content is AI-generated, while 15% would trust them more. "As awareness of generative AI grows, trust is not disappearing, but becoming increasingly conditional," said Ross Candido, VP ANZ at Meltwater.
Technical details
The report frames public attitudes around GenAI adoption and identifies measurable signals brands can act on. Key quantitative findings include:
- 86% want disclosure of AI-generated content.
- 89% of Australians support stronger government regulation.
- 32% would trust brands less if content is AI-generated; 15% would trust them more.
- 39% report excitement about AI, while 51% do not.
- 58% believe they can identify AI-generated content.
- 73% are concerned about misinformation.
- Online mentions rose 53%, with media driving 34% of coverage.
Practical implications for practitioners
The findings are operationally actionable for product, communications, and legal teams. Implement content provenance, visible labeling, and audit trails as first-order features. Consider integrating industry standards such as C2PA and adding metadata flags to content pipelines. Test disclosure formats with A/B experiments and instrument trust metrics (brand lift, share of voice, complaint rates). From a risk perspective, build monitoring for misinformation amplification and retain human-in-the-loop controls where accuracy is critical.
Context and significance
The report intersects with a broader regulatory trend toward mandatory AI transparency. Consumer expectations shown here are consistent with recent regulatory moves in multiple jurisdictions pushing for disclosure and provenance in generative content. For brands, disclosure is not purely compliance theater; it is a product signal that can protect reputation or, if mismanaged, accelerate distrust. For ML teams, this raises implementation questions: how to watermark, how to store and surface creation metadata, and how to integrate disclosure without degrading user experience or model utility.
What to watch
Expect accelerated adoption of standardized labeling frameworks, growing regulatory guidance in ANZ and other markets, and new platform features that surface content provenance. Brands that embed transparent design and measurement into their content pipelines will likely convert disclosure into a competitive trust advantage.
Scoring Rationale
The report signals meaningful operational and regulatory pressure on brands and product teams to implement disclosure and provenance features. It is not a technical breakthrough, but the consumer mandate raises implementation and compliance priorities for practitioners.