ANC Pressures Malatsi to Explain AI-Authored Policy

The ANC study group on communications and digital technologies has asked that Communications Minister Solly Malatsi appear before parliament to explain how South Africa's Draft National Artificial Intelligence Policy was produced partly using AI, TechCentral reports. The draft was withdrawn on 26 April after an internal review, which Malatsi said confirmed that its reference list contained fictitious sources, a revelation first reported by News24 and covered by Moneyweb and EWN. Imran Subrathie, the study group's chief whip, called the episode "one of the most alarming failures of ministerial oversight," according to TechCentral. Portfolio committee chair Khusela Diko had publicly demanded the draft be withdrawn and redrafted, IOL reports. The public comment period had been scheduled to close on 10 June, TechCentral says.
What happened
Communications Minister Solly Malatsi withdrew the Draft National Artificial Intelligence Policy on 26 April after an internal review found fictitious sources in the document's reference list, Moneyweb and EWN report. News24 first flagged the fabricated citations, prompting the department's internal review, Malatsi said in a statement published by Moneyweb. The ANC study group on communications and digital technologies has called for the minister to appear before parliament's portfolio committee to explain how AI was used in drafting the policy, TechCentral reports. Study group chief whip Imran Subrathie called the incident "one of the most alarming failures of ministerial oversight and intellectual rigour," according to TechCentral. Khusela Diko, chair of the Portfolio Committee on Communications and Digital Technologies, publicly demanded that the draft be withdrawn and redrafted, IOL reports. TechCentral notes the public comment period had been due to close on 10 June.
Editorial analysis - technical context
Industry reporting indicates the immediate cause was unverified use of generative AI during drafting, which produced fabricated academic citations. Generative models are known to hallucinate: they invent references and plausible-sounding but false details. For practitioners, the episode illustrates the concrete failure mode that arises when generative output is incorporated into policy documents without systematic source verification, citation checking, and human-led quality assurance.
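A first-pass citation audit of the kind this episode calls for can be partly automated. The sketch below is a minimal illustration, assuming references have already been extracted as DOIs (references without DOIs would need a title-based lookup instead); it checks each DOI against the public Crossref REST API and flags anything that does not resolve for human review. The example DOIs are illustrative: the first is a real paper, the second is deliberately invented.

```python
# Minimal citation sanity check: given DOIs extracted from a draft's
# reference list, confirm each one resolves to a real work via the public
# Crossref REST API. Anything that fails is flagged for human review.
import requests

CROSSREF_API = "https://api.crossref.org/works/"

def verify_doi(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref knows this DOI, False otherwise."""
    resp = requests.get(CROSSREF_API + doi, timeout=timeout)
    return resp.status_code == 200

def audit_references(dois: list[str]) -> list[str]:
    """Return the subset of DOIs that could not be verified."""
    return [doi for doi in dois if not verify_doi(doi)]

if __name__ == "__main__":
    # Hypothetical reference list; the second DOI is fabricated on purpose.
    suspect = audit_references([
        "10.1038/nature14539",        # real: LeCun et al., "Deep learning"
        "10.9999/made-up.2024.001",   # invented, should be flagged
    ])
    print("Unverified references:", suspect)
```

A check like this catches fabricated identifiers but not miscitations of real works, so it complements rather than replaces human verification.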
Context and significance
Editorial analysis: Government AI policy is a high-salience public good; credibility problems at the drafting stage can materially delay regulation and erode trust in governing institutions. Two broader implications stand out: first, national policy frameworks are now being drafted with the very tools they seek to regulate, which raises distinctive verification challenges; second, failures of this kind are likely to prompt calls for stricter procedural controls on the use of generative AI in official documents.
What to watch
- Whether the parliamentary portfolio committee formally summons Minister Malatsi, and the timing of any hearing, as requested by the ANC study group (TechCentral).
- The scope and findings of the department's internal investigation, and any consequence-management steps Malatsi announced in his statement, as reported by Moneyweb and EWN.
- The timeline and process for re-releasing a revised draft for public comment, and whether the department adopts explicit verification controls for references and evidence cited.
Practical takeaway for practitioners
Public-sector teams and consultants drafting high-stakes policy should treat generative outputs as provisional material requiring independent source verification, robust citation auditing, and human sign-off before publication. The South Africa case will likely feature in future compliance and governance checklists as an illustrative failure mode.
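To make that sign-off step concrete, here is a minimal sketch of a pre-publication gate. The names and structure are hypothetical and do not reflect any department's actual process: the idea is simply that a draft cannot be released until every reference is marked verified and a named human reviewer has signed off.

```python
# Sketch of a pre-publication gate: generative output is treated as
# provisional until every reference is independently verified and a named
# human reviewer signs off. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Reference:
    citation: str
    verified: bool = False   # set True only after an independent source check

@dataclass
class DraftPolicy:
    title: str
    references: list[Reference] = field(default_factory=list)
    human_signoff: str | None = None   # name of the accountable reviewer

    def ready_to_publish(self) -> bool:
        """Block release until all references verify and a human signs off."""
        return (all(r.verified for r in self.references)
                and self.human_signoff is not None)

draft = DraftPolicy(
    title="Draft National AI Policy",
    references=[Reference("Doe (2023), 'AI Governance'", verified=False)],
)
assert not draft.ready_to_publish()   # one unverified citation blocks release
```

The design choice worth copying is that verification and sign-off are explicit, recorded states rather than informal steps, which makes the audit trail inspectable after the fact.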
Scoring rationale
This story matters to practitioners because it concerns national AI governance and a concrete failure mode of generative models in policymaking. The episode is notable but not paradigm-shifting; it will influence verification practices and public-sector procurement of AI tools.