DECEMBER 19 — Generative artificial intelligence (GenAI) is a subfield of AI that leverages machine learning and deep learning techniques to generate new content, including text, images, audio and video. It is applied in various industries, such as healthcare, finance, entertainment, legal practice and education, to increase efficiency and productivity.

The legal profession, for instance, increasingly uses GenAI to draft, review, summarise or translate documents and to conduct preliminary legal research, thus lightening the burden of laborious and time-consuming tasks.

GenAI tools such as ChatGPT are powered by large language models (LLMs), which are built on the transformer architecture and trained on extensive corpora of textual data.

An LLM operates by predictive text generation: it predicts the probability of each candidate word appearing next in a sequence, rather than discerning facts or comprehending meaning.
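
As a rough illustration of this mechanism, the toy sketch below mimics a single step of next-word prediction. The four-word vocabulary and the scores are invented for illustration only; a real LLM scores tens of thousands of candidate tokens using billions of learned parameters.

```python
import math

# Toy next-word prediction: a language model assigns a score (logit) to
# every word in its vocabulary, converts the scores into probabilities,
# and emits a likely word. Nothing in this loop checks facts.
vocab = ["court", "plaintiff", "defendant", "banana"]
logits = [3.0, 2.1, 1.4, -1.5]  # hypothetical scores for the next word

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Pick the highest-probability word as the next token.
next_word = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print({w: round(p, 3) for w, p in zip(vocab, probs)})
print("Predicted next word:", next_word)
```

Because the model optimises for plausibility rather than truth, a fluent but entirely fabricated case name can receive just as high a probability as a genuine citation, which is the root of the problem described next.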

ChatGPT and similar GenAI tools are therefore not immune to hallucination, a phenomenon where LLMs produce inaccurate, unsubstantiated but plausible-seeming information or references in response to queries.

If lawyers cite or rely on non-existent case law in the delivery of legal services and in court without meticulous oversight, this could lead to wrong legal advice, undermine the integrity of the judicial system and erode public trust in the legal profession and the administration of justice.

Likewise, academics who quote fabricated case law or literature in their articles would suffer damage to their academic credibility and risk having their articles retracted by journals.

In September 2025, the High Court of Singapore delivered a judgment in Tajudin bin Gulam Rasul and Another v Suriaya bte Haja Mohideen, ordering the claimants’ counsel, who had cited a fictitious case generated by a GenAI tool in the claimants’ written submission, to pay costs of S$800 personally to the defendant.

Counsel for both the claimants and the defendant were also directed to provide their clients with a copy of the judge’s direction, in view of the gravity of the improper conduct.

This was because, as an officer of the court, an advocate and solicitor is “entrusted with the solemn duty of assisting the court in the administration of justice” and must ensure that the materials placed before the court are true and accurate.


Similar misuse of generative AI in legal practice has also been seen in other jurisdictions. For instance, in Mata v Avianca, Inc (2023, a United States case), two attorneys were sanctioned with a penalty of US$5,000 for citing non-existent judicial decisions generated by ChatGPT in an affirmation filed in opposition to a motion.

In People v Crabill (2023, a United States case), an attorney filed a motion citing fabricated cases generated by ChatGPT without reading the cases. 

Before the hearing of the motion, he discovered that the cited cases were fictitious, but he neither alerted the court nor withdrew the motion.

Subsequently, the attorney was suspended for a year and a day for his professional misconduct.

To avoid citing fictitious authorities in the delivery of legal services, in court submissions or in submissions to academic journals, it is important to use GenAI responsibly and to play a gatekeeping role by verifying the information it generates.

In fact, the Bar Council Malaysia’s Circular No. 342/2023, titled “The Risks and Precautions in Using Generative Artificial Intelligence in the Legal Profession, Specifically ChatGPT”, emphasises the precautions to be taken, namely to always check the outputs produced by ChatGPT or other GenAI tools against traditional legal databases.

Although GenAI has improved over time, legal hallucination remains a challenge for legal practitioners and academics who use it for preliminary legal research. 

A study published in 2024 by a team of researchers from the United States (Dahl et al 2024) revealed that legal hallucinations are pervasive, with rates ranging from 69 per cent to 88 per cent in response to legal queries, based on more than 200,000 queries tested against each of GPT-3.5, Llama 2 and PaLM 2.

Another study was conducted by a group of researchers in the United States to assess the reliability of leading AI legal research tools.

Their paper, published in 2025, reported that these tools hallucinated between 17 per cent and 33 per cent of the time (Magesh et al 2025).

In summary, while GenAI may offer convenience and expedite legal tasks, legal practitioners and academics should treat it with care, as a tool that generates preliminary drafts or information requiring verification rather than an end product.

Rigorous vetting should be undertaken to ensure accuracy before cited cases or articles are presented to the court and clients, or quoted in articles for publication.

* Kuek Chee Ying is a senior lecturer at the Faculty of Law, Universiti Malaya, and may be reached at [email protected]

**This is the personal opinion of the writer or publication and does not necessarily represent the views of Malay Mail.