Malatsi Withdraws South Africa Draft AI Policy

Communications and Digital Technologies Minister Solly Malatsi withdrew South Africa's draft National Artificial Intelligence Policy after media reporting showed that the document cited fictitious or unverifiable references. According to News24, several citations appear to be AI-generated hallucinations; Citizen reported that some of the draft's 67 references either do not exist or point to non-peer-reviewed items. Per ITWeb, Malatsi said in a statement that an internal inquiry had been opened and that "the most plausible explanation is that AI-generated citations were included without proper verification." The Department of Communications and Digital Technologies has said it is reviewing the reference list, per Citizen.
What happened
Communications and Digital Technologies Minister Solly Malatsi has withdrawn the draft National Artificial Intelligence Policy, according to ITWeb and TechCentral reporting. News24 reported that several references in the draft appear to be AI-generated hallucinations. Citizen reported that some of the 67 references listed in the draft either do not exist or refer to articles not published in recognised journals. ITWeb cites a ministerial statement that an internal inquiry was initiated and that the draft "contains various fictitious sources in its reference list." ITWeb also quotes the minister: "The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened."
Technical details
Editorial analysis - technical context: Automated text-generation tools are known to produce plausible-sounding but fabricated citations when prompted to supply references. Industry reporting in this case characterises the suspect entries as likely AI hallucinations (News24, TechCentral). The problem typically arises when model outputs are used verbatim without human verification of bibliographic metadata or source existence.
Context and significance
Industry context
The withdrawal affects a national policy document that, per ITWeb reporting, was released in April for a 60-day public consultation. Public trust in policy-making depends on verifiable evidence and transparent sourcing, and media coverage here focuses on credibility lapses in a high-profile regulatory artifact. For practitioners, the episode illustrates the reputational and procedural risks of using generative tools to draft technical or policy materials without rigorous source checks.
What to watch
- Editorial analysis: Whether the Department of Communications and Digital Technologies publishes a corrected reference list and a transparent audit of the drafting process; Citizen reported that the department says it is reviewing the references.
- Editorial analysis: Whether the department or Cabinet provides details on consequence management or changes to vetting procedures; ITWeb quotes the minister indicating that consequence management will follow.
- Editorial analysis: Revisions to the draft's public consultation timeline and any formal reissue of the policy framework, which ITWeb and TechCentral note had been opened for comment in April.
Practical note for practitioners
Teams using generative models for literature synthesis or drafting should treat model-produced citations as provisional. Best practice in comparable professional contexts is explicit verification of each citation against primary sources before publication. This incident is a concrete example of why that verification step matters for policy and technical documents.
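The verification step described above can be partly mechanised. The sketch below is a minimal, illustrative triage heuristic, not a method reported in this story: it sorts plain-text citation strings by whether they carry a machine-resolvable identifier (a DOI or URL, which can then be checked against the publisher or a bibliographic database) or need a manual lookup. The function name and the heuristic itself are assumptions for illustration; passing triage does not mean a source exists.

```python
import re

def triage_reference(ref: str) -> str:
    """Rough first-pass triage of a model-produced citation string.

    Returns "check-doi", "check-url", or "verify-manually". This is an
    illustrative heuristic only: a DOI or URL gives you something to
    resolve mechanically, but a human must still confirm the source
    exists and says what the draft claims it says.
    """
    if re.search(r"10\.\d{4,9}/\S+", ref):
        return "check-doi"       # resolve the DOI before trusting it
    if re.search(r"https?://\S+", ref):
        return "check-url"       # fetch and confirm title/author match
    return "verify-manually"     # no machine-resolvable identifier

# Hypothetical reference strings for illustration only.
refs = [
    "Smith, J. (2023). AI governance in practice. doi:10.1234/abcd.5678",
    "Policy futures report, https://example.org/report.pdf",
    "Nkosi, T. (2022). Journal of AI Policy, 12(3), 45-67.",
]
print([triage_reference(r) for r in refs])
# → ['check-doi', 'check-url', 'verify-manually']
```

Even with triage in place, every entry ultimately needs a human check against the primary source; the heuristic only prioritises the work.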
Scoring Rationale
The story affects national AI governance and practitioner trust in policy documents. It is notable for exposing a common generative-AI failure mode with direct regulatory consequences, making it important for ML practitioners and policymakers.