AI hallucinations and bogus citations: Attorneys, you've been warned

A recent courtroom drama in the United States highlighted the perils of blindly trusting generative AI. A California judge fined two law firms a combined $31,000 after discovering that their legal brief was riddled with bogus AI-generated research, including non-existent case citations.
Image source: Freepik

In his scathing order, the judge conceded that he had at first been intrigued by the supposed authorities, until he tried to locate them and realised they did not exist. “That is scary,” he wrote, noting that if he had not double-checked, the phantom cases might have found their way into a judicial decision.

This incident has swiftly become a cautionary tale for lawyers, showing i) how convincingly an AI tool can fabricate information, and ii) how failing to verify AI output can lead to embarrassment and sanctions. The message is clear – no matter how advanced the tool, human diligence remains essential.

A similar calamity has already unfolded in our own courts. On 2 July 2025, Acting Judge DJ Smit of the Gauteng High Court, Johannesburg, found that two cases cited in urgent heads of argument simply did not exist.

Counsel admitted the citations had come from an online subscription tool, “Legal Genius”, marketed as being trained solely on South African judgments. The judge criticised the legal team for relying on unverified AI output, warned of grave repercussions for the administration of justice, and referred the matter to the Legal Practice Council for investigation.

This story of a counsel tricked by AI-generated citations also flags a broader danger. If generative AI can produce plausible but fictional content, what happens when it produces material that is real yet copied from someone else’s work? Who shoulders the risk if that content infringes copyright or misleads a court? These questions take us into a rapidly developing legal frontier that South African lawyers, businesses and creators cannot afford to ignore.

When generative AI copies instead of creates

While attorneys grapple with AI’s tendency to hallucinate cases, a different controversy is gathering pace, this time over copyright. In June 2025, news emerged that Meta’s flagship model, LLaMA, could reproduce lengthy passages from popular novels verbatim. Stanford scholar Mark Lemley described the model as behaving like a giant zip file of copyrighted text.

If those findings are accurate, Meta faces more than an academic headache. Reproducing substantial parts of a protected work without permission is infringement, plain and simple, and authors have already sued Meta, alleging that the model was trained on a trove of pirated books. Lemley’s team estimated that even a 3% infringement rate could expose Meta to nearly one billion dollars in damages.

For South African practitioners who use AI to draft documents, generate marketing copy or create art, the natural question is how our law would handle such situations. That requires a look at South Africa’s copyright framework and how it treats AI-generated works.

Authorship and ownership under South African law

South African copyright law is rooted in the Copyright Act 98 of 1978. A 1992 amendment already contemplates computer-generated works by defining the author of such a work as the person who undertook the arrangements necessary for its creation. Put plainly, when an AI produces a creative work, the human who set the parameters or supplied the prompts is deemed the author.

South African courts distinguish between computer-assisted works, where the human remains the creative driver, and computer-generated works, where human input is minimal. The greater the human’s creative involvement, the more likely the user will qualify as the author.

This approach sits alongside the general originality requirement that a work must reflect the author’s own skill, effort and judgment. Using AI does not sidestep that rule. The user must still show that the output reflects meaningful intellectual input rather than being wholly machine-driven. In short, South African law can recognise AI-assisted output as copyright-protected, but a human creative contribution is indispensable. The AI itself is merely a tool.

Some foreign systems differ. United States courts and the Copyright Office maintain that a non-human entity cannot be an author. The recent decision in Thaler v Perlmutter (2025) reaffirmed that rule, echoing the famous “monkey selfie” case in which a photograph taken by a monkey attracted no copyright at all.

South African law, by contrast, fills the authorship gap by allocating rights to the human arranger, a nuance that anticipated the AI puzzle decades ago.

The Copyright Amendment Bill: an AI blind spot?

The pending Copyright Amendment Bill, first tabled in 2017, is the most ambitious overhaul of South African copyright in years, yet it remains largely silent on AI-generated content.

It does propose a broad fair use exception modelled on United States law, potentially easing activities such as text and data mining for AI training. That would align South Africa with jurisdictions like the European Union, which in 2019 introduced specific data mining exceptions.

Still, the Bill does not clarify how much human input is required for protection or who is liable when AI output infringes rights. As of mid-2025, the Bill has again passed Parliament but awaits constitutional scrutiny. For now, the 1978 Act, possibly supplemented by fair use when enacted, must stretch to cover AI issues.

The Anthropic decision: transformative training and pirated libraries

On 23 June 2025, United States District Judge William Alsup delivered the first fully reasoned judgment on whether feeding copyrighted books into a large language model counts as fair use.

In Bartz v Anthropic, he held that training Claude on lawfully acquired texts was “spectacularly transformative” because the model learns patterns of language rather than republishing the books. Judge Alsup praised AI training as “among the most transformative innovations many of us will see in our lifetimes,” echoing earlier fair use decisions such as Authors Guild v Google.

Yet the judge drew a firm line at Anthropic’s 90TB cache of books scraped from shadow libraries. He refused to excuse what he labelled “theft”, ruling that acquiring and storing pirated works is infringing even if the subsequent training use is transformative. With over seven million books involved, the theoretical damages run into the trillions.
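To see where figures of that magnitude come from: US statutory damages under 17 U.S.C. § 504(c) range from $750 to $150,000 per work where infringement is wilful. The short Python sketch below is a back-of-envelope illustration only – the corpus size comes from the judgment, while actual awards would depend on how many works are registered, proven infringed and pursued in litigation.

# Back-of-envelope statutory-damages arithmetic (17 U.S.C. § 504(c)).
# Illustrative only: real awards depend on how many works are
# registered, proven infringed and actually litigated.

books_in_cache = 7_000_000       # pirated works cited in Bartz v Anthropic
statutory_min = 750              # per-work statutory minimum (USD)
statutory_max_wilful = 150_000   # per-work maximum for wilful infringement

floor = books_in_cache * statutory_min
ceiling = books_in_cache * statutory_max_wilful

print(f"Theoretical floor:   ${floor:,}")    # $5,250,000,000
print(f"Theoretical ceiling: ${ceiling:,}")  # $1,050,000,000,000

Even at the statutory minimum, exposure on a seven-million-book cache runs to billions of dollars.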

The decision thus offers a Solomonic split: i) AI training can be fair use, but ii) only when the inputs are lawfully sourced. Two days later, Judge Vince Chhabria reached a parallel conclusion in Kadrey v Meta, reinforcing a nascent judicial consensus that provenance is paramount.

For South Africa, where no broad fair use defence yet exists, Bartz is instructive. If United States courts permit AI training provided the data is clean, while punishing pirated inputs, our more restrictive regime could deem the entire act infringing unless Parliament enacts clear exceptions.

Either way, the case underlines that a spotless data pipeline is becoming existential for AI developers.

Liability and infringement: Who pays when AI goes astray?

If generative AI reproduces protected content, liability usually falls on the person who reproduces or distributes it.

For example, an architect who publishes an AI-generated logo that copies a famous artist’s design, a lawyer who files an affidavit containing unattributed AI-generated text, or a company whose chatbot reproduces chapters from a novel can each face infringement claims. Service providers may also face claims for authorising infringement if their models memorise and output copyrighted material, an issue now being tested in foreign litigation.

Professional liability is equally acute. Lawyers have duties of competence and honesty. Relying on unverified AI output, whether citations or factual assertions, can breach those duties and invite malpractice claims.

The burden, therefore, remains on the human user to vet AI content. South African firms should treat AI output as third-party material: i) confirm ownership, ii) secure permission where required, and iii) verify accuracy before publication.

Proceeding with caution: advice for legal professionals and creators

Never accept AI output at face value. Verify sources and run originality checks to protect your reputation. Understand the terms of any AI platform you use, paying close attention to ownership and licensing clauses.

Safeguard client confidentiality; remember that many platforms retain input data for further training. Until fair use becomes law, remain within existing fair-dealing exceptions or obtain permission before reusing protected material.

Above all, embrace AI’s efficiency while retaining human oversight. Generative AI is an assistant, not an autonomous agent, and ultimate responsibility still rests with the human in control.

Generative AI is not going away. Its role in research, drafting and creative work will only grow. South African practitioners can either shun it as a risky black box or engage with it thoughtfully, guided by the 1978 Act, alert to the coming Amendment Bill and disciplined about verifying provenance.

By pairing technological curiosity with professional scepticism, we can harness AI’s speed and insight while safeguarding the originality that copyright seeks to reward.

The message to clients and colleagues is straightforward: i) embrace the tool, ii) respect the law, and iii) remember that in the AI era, as in centuries past, sound judgment remains a human art.

About Tim Laurens

Tim Laurens is an Associate at KISCH IP.