A comparative East African reflection on artificial intelligence, procedural fairness, and the future of legal drafting

A self-represented litigant in Nairobi used artificial intelligence to draft his pleadings. He reviewed, edited, and adopted every word. His filings contained no fabricated cases, no false citations. He acted transparently, disclosing his use of AI tools.
Then the High Court of Kenya at Milimani set aside his judgment, called his conduct an abuse of process, and barred him from ever filing any “machine‑generated” pleading in any Kenyan court – unless Parliament first passes a law explicitly allowing AI‑assisted drafting.
That is not judicial caution. It is judicial anxiety in the face of technological disruption.
The Ruling in Brief
In HCJRMISC/E120/2025, High Court of Kenya at Milimani, Nairobi (ruling delivered 16 April 2026), Justice J. Chigiti (SC) considered whether it is lawful to draft pleadings using artificial intelligence tools. The respondent/ex parte applicant admitted using what he described as ordinary digital tools, including legal research tools, to assist in writing. He maintained that he had personally reviewed, edited, and adopted every document and remained personally responsible for all factual statements on oath and legal citations. He argued that his pleadings contained no fabricated cases, false citations, or invented quotations, and that, being self‑represented, he had used lawful tools to participate effectively in court.
The Court disagreed. It held that:
· The use of personalised drafting tools, structures and methodologies not provided for under the rules of drafting was “deplorable”.
· Allowing such departures would create a “litigation disaster” leaving judges with no guiding beacons.
· Generating pleadings through unknown tools or AI gives an unfair advantage to the user, amounting to an affront to access to justice under Article 48 of the Constitution.
· The fact that the applicant admitted using such tools amounted to an abuse of the court process.
· The applicant could not “vouch for or verify for the court the truthfulness or accuracy” of AI‑generated pleadings, because that would mean he acted as a judge in his own case, violating natural justice.
On that basis, the Court barred the applicant from filing any other pleadings in any court that are machine‑generated, unless a law is passed in Kenya allowing or providing for drafting using artificial intelligence tools.
The Court did observe that technology is a powerful socio‑economic growth tool when harnessed within a legal framework, and invited the Rules Committee to consider amending the Civil Procedure Rules through public participation to embrace technology and AI drafting rules. But the prohibition stands.
The Flaws in the Judgment
Respectfully, the ruling cannot withstand serious scrutiny. I identify four fundamental errors.
1. The “Procedural Integrity” Error
The Court reasoned that because the Civil Procedure Rules do not mention AI, using AI is unlawful. But the Civil Procedure Rules do not mention laptops, either. They do not mention word processors, grammar‑check software, the delete key, or the backspace button. No judge has ever struck a pleading for being typed rather than handwritten.
Silence in the rules is not a prohibition. It is a gap that the rules themselves empower courts to fill – reasonably, proportionately, and with an eye to justice, not to ritual.
2. The “Unfair Advantage” Error – This One Is Fatal
The Court held that a litigant using AI has an unfair advantage over one who does not, and that this violates equality of arms.
Let us apply that logic consistently.
· Google vs. Law Reports – A lawyer with a smartphone and an internet connection can find authorities in seconds. Another, relying on a dusty shelf of hardbound law reports, takes hours. Is that unfair? No judge has ever said so.
· AfricanLii / KenyaLii – These digital databases make case law searchable, cross‑referenced, and instantly accessible. A litigant without them is at a disadvantage. Has any court called that an affront to Article 48? On the contrary, the Judiciary itself promotes these tools.
· Ulii (Uganda Legal Information Institute) – It now uses AI to summarise judgments. No judge in Uganda has condemned it. No advocate has been barred for citing an AI‑generated summary. The tool is public, free, and welcomed.
· Modern medicine – A patient in a Nairobi teaching hospital has access to MRI scans, robotic surgery, and AI‑assisted diagnostics. A patient in a remote clinic does not. That inequality is real. But no court has banned MRI machines because not everyone can afford them. The answer is to spread the technology, not to ban it.
The Court confused asymmetry with injustice. An asymmetry is unjust only when it is arbitrary (only one side gets the tool), hidden (use is not disclosed), or undermines a core right (such as the ability to test evidence). None of those conditions applied here. The litigant disclosed his AI use. The tools are widely available. And the core right – to present a truthful, coherent pleading – was enhanced, not undermined.
If the Court’s logic were applied consistently, we would still be filing pleadings in quill and ink. The unfair advantage is not in the tool. It is in the refusal to adapt.
3. The “Judicial Capacity” Error
The Court said it cannot “verify” AI‑generated content, so the safer course is to ban it entirely.
But courts never “verify” how a human wrote a pleading. They do not audit pen strokes, interview secretaries, or review dictation logs. They look at the final document. If it contains lies, fake cases, or false citations, they sanction the filer. That same framework works perfectly well for AI.
The Court could have required disclosure, a personal verification oath, and a statement that no fabricated content is included. That is governance, not prohibition. Instead, it chose the nuclear option.
4. The “Parliament’s Prerogative” Error
The Court held that only Parliament, not the courts, can authorise AI use in legal process.
Artificial intelligence is not a controlled substance. It is a tool. Courts do not need a statute to permit the use of search engines, word processors, or online databases. They do not need an Act of Parliament to allow a lawyer to take a typing class.
Mandating a legislative framework for basic productivity software is not judicial restraint. It is jurisdictional abdication.
—
A Constitutional Mirror: Article 159 of the Kenya Constitution
The ruling’s approach sits uneasily with Kenya’s own constitutional framework. Article 159(2)(d) of the Kenya Constitution 2010 commands that “justice shall be administered without undue regard to procedural technicalities.”
Procedure exists to serve justice – not to imprison it. A prohibition on an entire category of drafting tools, without any evidence of misuse, elevates form over substance. That is precisely what Article 159 warns against.
If a self‑represented litigant files a pleading that is truthful, coherent, and personally verified, does the mere fact that an AI assisted in its composition make it less worthy of consideration? The Constitution suggests the answer is no.
—
What the Court Could Have Done – And What Others Are Doing
A more thoughtful, proportionate approach is not only possible; it is already being implemented elsewhere.
In Kenya itself, Justice Bahati Mwamuye recently struck out an AI‑assisted filing – but for procedural defects (missing notice statements, non‑compliant affidavits), not for AI use itself. He gave the litigant leave to refile. That is proportionate. (See AllAfrica, 11 March 2026)
Internationally, Singapore’s State Courts have issued a detailed Guide on the Use of Generative Artificial Intelligence Tools by Court Users (effective 1 October 2024). Lawyers may use AI but remain fully responsible for all content; they must fact‑check, they must not fabricate evidence, and violations may lead to sanctions. No prohibition. Just governance. (Registrar’s Circular No. 9, State Courts of Singapore)
In Estonia, small contract disputes below €7,000 can be decided by an AI judge that proposes a decision; a human judge then reviews and may modify or set it aside. That system has reduced backlog without sacrificing due process. (Law Society Journal, Australia, August 2024)
Even Kenya’s own Chief Justice, Martha Koome, announced in August 2025 that the Judiciary is developing an AI Adoption Policy Framework to guide integration of AI tools while safeguarding judicial independence, data privacy and due process. (Judiciary of Kenya official website, 11 August 2025)
The Chigiti ruling is swimming against the tide of its own institution’s planning.
The correct path is clear:
· Disclosure – A litigant or lawyer using AI to draft pleadings should disclose that fact.
· Verification – The filer must personally review and adopt all content, swearing to its truthfulness.
· Accountability – False citations, fabricated cases, or misleading content remain sanctionable, whether written by a human or generated by a machine.
· No prohibition – The tool itself is not the offence. Misuse is.
The Legal Profession Responds
Prominent Kenyan lawyers have reacted with dismay.
Ahmednasir Abdullahi, SC, one of Kenya’s most respected advocates, wrote on X: “What an absurd decision. Does it matter whether one drafts pleadings using AI tools or uses a typewriter? It is none of the court’s business.” (Nairobi Law Monthly, 21 April 2026)
Steve Biko Wafula, senior counsel, published a detailed critique: “This ruling reads less like modern jurisprudence and more like a judicial panic attack in the face of technological change… The court had a first‑rate jurisprudential problem in its hands and squandered it, trying instead to drag the administration of justice back into the pre‑digital age.” (Soko Directory, 21 April 2026)
These are not fringe voices. They are the heart of the Kenyan bar.
A Word to My Ugandan Colleagues – And to Our Judges
I write from Uganda, where we have not (yet) seen a ruling of this kind. Our judges have quietly tolerated – perhaps even welcomed – the steady digitisation of practice. We use e‑filing, and we cite Ulii’s AI‑generated summaries without panic.
But the same instinct that produced the Chigiti ruling lives everywhere: the fear that the machine will replace the judge, that the algorithm will swallow the advocate, that technology will dissolve the profession’s hard‑won exclusivity.
That fear is misplaced.
AI does not abolish judgment. It does not abolish ethics. It does not abolish the court’s ultimate authority. What AI abolishes is inefficiency – hours spent searching for authorities that software can locate in seconds, repetitive drafting, and the false prestige built around scarcity of technical knowledge.
And perhaps that is what truly frightens some corners of the profession. When information becomes democratised, gatekeepers begin to sweat.
But justice does not belong to the gatekeepers. It belongs to the public. And the public does not care whether a pleading was drafted by candlelight, typewriter, Microsoft Word, or artificial intelligence. The public cares whether justice is accessible, affordable, timely, intelligible, and fair.
If AI helps achieve that mission, then resisting it is not conservatism. It is obstruction. And obstruction disguised as professionalism remains obstruction.
Conclusion: The Future Cannot Be Injuncted
History is littered with institutions that initially resisted the printing press, telephones, computers and the internet – only to later embrace them as essential tools.
Did the world wait for a complete legal framework before embracing mobile money? Did banks issue a constitutional petition before M‑Pesa rewired African commerce? Did Western Union obtain an injunction against digital wallets because “money transfers” had traditionally been their sacred territory? Of course not.
Technology arrived. Society adapted. Regulators followed. That is how civilisation has always moved.
The same will happen with AI in the legal profession. The only remaining question is: will courts lead this transformation – or become footnotes in it?
To our Kenyan brothers and sisters: this ruling is a warning for all of us. Not because Kenya is wrong, but because the same instinct – to fear the machine, to reach for a prohibition when a guideline would suffice – lives in every jurisdiction, including ours. The question is not whether Uganda will face this debate. The question is whether we will face it more wisely.
And to any judge reading this: thank you for your service. But please, do not ban the future. Regulate it, guide it, subject it to human oversight – but do not pretend that a tool becomes an abuse simply because it is new.
This time, let us not make the same mistake.
― END ―
Disclaimer: This blog is a critique of a judicial ruling and a contribution to the conversation on technology and legal practice. It is not intended as legal advice, nor as an attack on any judicial officer or institution. The author remains committed to the rule of law, judicial independence, and the responsible integration of technology into the administration of justice.

Enen Ambrose
Member: Judiciary Affairs Committee
Uganda Law Society
For feedback or questions, write to: enen@enenlegalworld.com
