Amplitude Survey Reveals Generational AI Trust Gap Costing Businesses Value

Amplitude's national survey of more than 1,000 Australian office workers exposes a sharp generational divide in AI trust and usage that is reducing organisational value capture. Younger workers (18-24) are far more likely to use AI daily (39%) and to trust AI recommendations over their own judgement (31%) than older workers (55-64), where trust falls to 4% and daily use to 20%. Despite frequent use by junior staff, only 13% of 18-24s say AI is core to their organisation, and younger employees mostly upskill outside work (40% outside vs 32% during work). The result is a leadership-driven adoption gap, increased shadow-AI risk, and a widening skills pipeline problem unless companies invest in leader-led strategy, governance, and workplace training.
What happened
Amplitude, the AI analytics platform, released a national survey of more than 1,000 Australian office workers showing a pronounced generational trust gap in AI. The study found 39% of workers aged 18-24 use AI tools daily versus 20% of those aged 55-64, and trust in AI recommendations is 31% for 18-24s compared with 4% for 55-64s. Yet only 13% of 18-24s and 9% of 25-34s say AI is core to their organisation, indicating frequent local use without strategic leadership direction.
Technical details
The research segmented respondents by age cohort and measured frequency of AI use, relative trust in AI recommendations, and where upskilling occurs. Key metrics: 39% of 18-24s use AI daily versus 20% of 55-64s; 31% of 18-24s trust AI recommendations over their own judgement versus 4% of 55-64s; 40% of younger employees upskill outside work versus 32% during work; and only 5% use mentorship or peer learning. The survey is positioned as a press release-style study and methodology notes are limited in public variants, so treat absolute percentages as directional signals rather than fully audited population estimates. Still, the sample size and consistent gaps across measures make the pattern credible for planning.
Why it matters
The pattern creates three operational risks for practitioners and leaders. First, a leadership trust deficit can prevent the design and rollout of the governance, metrics, and integration work required to move pilots into production and embed ML-driven decisioning. Second, heavy usage by junior staff outside formal structures increases shadow-AI risk: inconsistent tooling, undocumented prompts, and untracked data flows. Third, the observed upskilling behaviour, where younger employees learn AI skills largely outside work, signals both a retention opportunity and an organisational failure to invest in talent development.
Practical implications for teams
Organisations that leave AI adoption to ad hoc, junior-led experimentation will see uneven ROI and compliance gaps. To capture value, product, data-science, and engineering leaders need to align on policy, telemetry, and training. Recommended priorities for immediate action include:
- Establish leader-led AI strategy and clear policies that define acceptable use, data handling, and escalation paths
- Measure adoption and outcomes with instrumentation that tracks AI-driven decisions, model provenance, and key business metrics
- Create structured, work-time upskilling programs and mentorship to shift informal learning into organisational capability
- Audit shadow-AI risk by inventorying tools and third-party APIs, and standardise approved toolsets
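The instrumentation recommendation above could start as a minimal decision-logging helper that records which tool and model produced an AI-assisted output and whether a human reviewed it. This is a sketch under stated assumptions: the event schema, field names, and tool identifiers here are illustrative, not Amplitude's or any vendor's actual telemetry format.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical event schema for AI-usage telemetry.
# Fields chosen to capture provenance and oversight, not personal identity.
@dataclass
class AIDecisionEvent:
    tool: str             # approved tool name, e.g. "internal-copilot"
    model: str            # model identifier, for provenance audits
    user_role: str        # role or cohort, avoids logging individuals
    decision_type: str    # e.g. "draft", "classification", "recommendation"
    human_reviewed: bool  # whether a person approved the output
    timestamp: str        # UTC ISO-8601

def log_ai_decision(tool: str, model: str, user_role: str,
                    decision_type: str, human_reviewed: bool) -> AIDecisionEvent:
    """Build an event record and emit it as structured JSON."""
    event = AIDecisionEvent(
        tool=tool,
        model=model,
        user_role=user_role,
        decision_type=decision_type,
        human_reviewed=human_reviewed,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    logging.getLogger("ai_telemetry").info(json.dumps(asdict(event)))
    return event

# Example: a junior analyst drafts a report section with an approved tool.
event = log_ai_decision("internal-copilot", "example-model-v1",
                        "analyst", "draft", human_reviewed=True)
```

Even this minimal record supports two of the priorities above: aggregating events by `tool` exposes shadow-AI usage outside the approved set, and the `human_reviewed` flag gives leaders a simple oversight metric to track as adoption scales.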
Context and significance
This study arrives as the Australian government deepens engagement with AI firms and safety initiatives, including public partnerships with vendors such as Anthropic and the formation of the AI Safety Institute. The gap mirrors global patterns where younger cohorts adopt and trust AI faster, but contrasts with leadership conservatism that slows enterprise integration. For ML practitioners, the finding reframes two common debates: whether to prioritise model innovation or adoption scaffolding, and how to design governance that scales human-machine workflows. The research suggests near-term returns will follow governance and training investments, not just new models.
What to watch
Watch leadership responses and budget allocations over the next 6-12 months. Will organisations convert informal, student-style upskilling into formal programs and mentorship? Also monitor regulatory developments from Australia's AI initiatives that could mandate governance standards for enterprise AI.
"The age-based discrepancy in trust around AI means senior decision-makers may inadvertently downplay its potential, limiting the value organisations derive from these tools," said Mark Drasutis, reinforcing that managerial action is the gating factor between experimentation and measurable business impact.


