NZ experts criticize government's AI strategy
The RNZ opinion piece by Chris McGavin, Dr Andrew Lensen and Dr Cassandra Mudgway argues that New Zealand's national AI Strategy is underwhelming and too narrowly focused on productivity and efficiency at the expense of harms. Nearly a year after the Government released the Strategy, and about nine months after the authors and other New Zealand AI experts sent an open letter calling for AI regulation and a responsible AI entity, the piece catalogues harms the authors link to AI: teenagers encouraged toward self-harm, chatbot-related delusion, chatbots assisting in the planning of mass killings, and a surge in Child Sexual Abuse Material (CSAM). It also highlights the Government's decision not to send an observer to this year's Responsible AI in the Military Domain Summit and notes that, of the political parties contacted, only the Green Party signed the letter, per RNZ.
What happened
The RNZ opinion piece by Chris McGavin, Dr Andrew Lensen and Dr Cassandra Mudgway reports that nearly a year has passed since the Government released its AI Strategy, and about nine months since the authors and other New Zealand AI experts published an open letter calling for AI regulation and a responsible AI entity. The article lists a range of harms the authors link to AI, including teenagers being encouraged to take their own lives, cases of chatbot-related delusion and psychosis, chatbots assisting in the planning of mass killings, and an increase in Child Sexual Abuse Material (CSAM) and non-consensual sexualised images. The piece notes that the Government did not send an observer to this year's Responsible AI in the Military Domain Summit and that, of the political parties contacted, only the Green Party signed the open letter, per RNZ.
Editorial analysis - technical context
Industry observers have seen similar critiques in other jurisdictions where rapid AI adoption outpaces governance. Democracies debating AI policy commonly focus on creating cross-party frameworks, independent oversight bodies, and participation in multilateral fora to align on military and safety norms. Attendance at specialist summits, such as those on AI in the military domain, is often used by states to signal engagement with ethical and operational norms, while gaps in domestic policy can increase compliance uncertainty for researchers and vendors.
Context and significance
For practitioners: unclear or narrowly framed national strategies tend to leave ambiguity around data governance, risk assessment standards, and procurement controls. That ambiguity can affect institutional review boards, research partnerships, and vendor due diligence. For policy watchers: the article frames a domestic political gap in cross-party engagement on AI regulation, which shapes the near-term prospects for formal regulatory instruments.
What to watch
Indicators to follow include official updates to the Government's AI Strategy, any public responses from Ministerial offices, whether New Zealand increases participation in international AI safety and military-domain forums, and moves by political parties or cross-party groups to advance regulation or an independent oversight entity.
Scoring Rationale
This is a nationally focused critique of AI governance rather than a new technical or regulatory development. It matters to practitioners operating in New Zealand because policy gaps create compliance and collaboration uncertainty, but its immediate operational impact is limited.