Canadians Embrace AI but Risk Exposing Tax Data to Public Tools
A new H&R Block Canada survey finds Canadians broadly open to using AI at home, work, and even in personal relationships, especially younger cohorts. The survey flags a risky behavior: many people input sensitive financial and personal data into public AI systems such as ChatGPT, Google Gemini, and Microsoft Copilot when seeking tax or financial guidance. Those open tools can hallucinate, provide outdated or jurisdictionally incorrect tax advice, and expose data to third parties. The clear practical takeaway for practitioners and informed users is to avoid submitting confidential tax data to public models and to prefer licensed tax software, closed enterprise AI, or professional advisors for filing.
What happened
H&R Block Canada released a nationwide survey showing Canadians are receptive to applying artificial intelligence across domestic and professional domains, but many are using free public AI services to assist with personal finances and tax filing. The survey highlights a cautionary gap: consumers are conflating accessibility with accuracy and privacy when they turn to public models such as ChatGPT, Google Gemini, and Microsoft Copilot for tax help.
Technical details
Public, general-purpose models are not tuned to maintain current, jurisdiction-specific tax code or to guarantee non-disclosure of submitted data. The key technical failure modes are well understood by practitioners: hallucinations, where the model fabricates rules or numbers; stale knowledge bases that miss recent legislative changes; and data handling pipelines that may log prompts for model improvement. Practical risks include incorrect deduction calculations, misapplied credits, and accidental sharing of SINs or bank details with third-party training pipelines.
Observed risky behaviors:
- Users feeding personally identifiable information and financial specifics into public chat interfaces
- Expecting step-by-step, compliance-safe tax guidance from non-certified models
- Assuming a conversational model will warn about or correct tax filing errors automatically
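One mitigation for the first behavior is an input guardrail that screens prompts for sensitive identifiers before they ever reach a public model. The sketch below is a minimal, hypothetical example: it flags 9-digit sequences formatted like Canadian SINs and confirms them with the Luhn checksum (which valid SINs satisfy) before redacting. A production filter would cover far more PII types (bank accounts, addresses, names) and would not rely on regexes alone.

```python
import re

# Matches 9 digits in the common SIN groupings: 123456789, 123-456-789, 123 456 789.
SIN_RE = re.compile(r"\b(\d{3})[- ]?(\d{3})[- ]?(\d{3})\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum; Canadian SINs (like credit card numbers) must pass it."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def redact_sins(prompt: str) -> str:
    """Replace Luhn-valid SIN-shaped numbers; leave other digit runs alone."""
    def sub(m: re.Match) -> str:
        digits = "".join(m.groups())
        return "[REDACTED-SIN]" if luhn_valid(digits) else m.group(0)
    return SIN_RE.sub(sub, prompt)

# 046-454-286 is the widely used example SIN that passes the Luhn check.
print(redact_sins("My SIN is 046-454-286, which credits can I claim?"))
# → My SIN is [REDACTED-SIN], which credits can I claim?
```

Because the Luhn check is applied before redacting, arbitrary 9-digit numbers (invoice IDs, reference numbers) that fail the checksum pass through untouched, which keeps false positives down.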
Context and significance
This survey reinforces two ongoing trends in applied AI. First, consumer adoption continues to outpace education on model limitations, increasing operational risk in high-stakes domains like personal finance. Second, it underscores why organizations are accelerating deployment of closed enterprise systems or privacy-preserving models for regulated use cases. For data scientists and ML engineers, the result is a reminder to bake provenance, explainability, and up-to-date rule integration into any financial assistant. For product teams, the finding intensifies pressure to provide explicit guardrails, user warnings, and structured input/output schemas when exposing models to sensitive domains.
What to watch
Expect more advisories from financial services and regulators, and growing demand for tightly controlled, auditable AI workflows for tax and finance. Practitioners should prioritize data handling policies, differential privacy or on-prem inference, and automated checks against authoritative tax rule engines when building financial assistants.
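An automated check against a rule engine can be as simple as validating model-suggested amounts against a versioned rule table before anything reaches the user. The sketch below is illustrative only: the rule names and caps are invented, and a real system would query an authoritative, dated tax rule source rather than a hard-coded dict.

```python
# Hypothetical rule table. The credit names and dollar caps are placeholders,
# not real CRA limits; a production assistant would pull these from an
# authoritative, versioned tax rule engine keyed by tax year and jurisdiction.
RULE_LIMITS: dict[str, float] = {
    "tuition_credit": 5000.0,
    "medical_expense": 2500.0,
}

def validate_claim(credit: str, amount: float) -> tuple[bool, str]:
    """Reject model-suggested amounts the rule table cannot substantiate."""
    cap = RULE_LIMITS.get(credit)
    if cap is None:
        return False, f"unknown credit '{credit}': route to a human reviewer"
    if amount > cap:
        return False, f"'{credit}' exceeds cap ({amount:.2f} > {cap:.2f})"
    return True, "ok"

# A model-suggested over-cap claim is blocked instead of shown to the user.
print(validate_claim("tuition_credit", 6000.0))
print(validate_claim("tuition_credit", 1200.0))
```

Routing unknown or over-cap claims to a human reviewer, rather than silently correcting them, keeps the audit trail intact and avoids the assistant itself becoming an unvetted source of tax advice.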
Scoring Rationale
The survey highlights important consumer risk behaviors and reinforces best practices for handling sensitive data with AI, but it does not introduce new technical findings. This makes it practically relevant for product, security, and compliance teams but not industry-shaping.