AI use in Canadian courtrooms carries risk of errors, penalties: lawyers

Ron Shulman used to suspect a client had gotten help from a family member or partner when someone who usually communicated in short emails suddenly sent a long message that read like a legal brief. Now, the Toronto family lawyer asks clients whether they have used artificial intelligence.

And in most cases, he says, the answer is yes.


Almost every week, his firm receives messages written by or with the help of artificial intelligence, a change Shulman says he has noticed over the past few months.

While AI can efficiently summarize information or organize records, some clients seem to rely on it “as a kind of superintelligence,” using it to decide how to proceed in their case, he said.

“This poses a significant challenge” because AI is not always accurate and often agrees with the person using it, Shulman said in a recent interview.

Some people are now also using AI to represent themselves in court without a lawyer, which can delay trials and increase legal costs for others as parties wade through mountains of AI-generated material, he said.

As AI permeates more and more aspects of everyday life, it is increasingly making its way into the courts and legal system.


Materials created using platforms like ChatGPT have been submitted to courts, tribunals and boards across Canada and the United States over the past few years, sometimes leaving lawyers, or those navigating the justice system on their own, confused by so-called “hallucinations” – citations that are incorrect or simply made up.

In one notable case, a Toronto lawyer is facing criminal contempt proceedings after including cases made up by ChatGPT in a filing earlier this year and then misrepresenting what had happened when questioned by the presiding judge. In a letter to the court months later, the lawyer said she had misrepresented the situation “out of fear of possible consequences and sheer embarrassment.”

AI hallucinations can have both financial and reputational costs.

In the fall, a Quebec court imposed a $5,000 penalty on a man who turned to generative artificial intelligence to help prepare his documents after parting ways with his lawyer. Shortly after, an Alberta court ordered a further $500 in costs against a woman whose filings cited three non-existent cases, warning that self-represented litigants could expect “more significant penalties” in the future if they do not follow the court's guidance on AI.


Courts and professional organizations in several provinces have issued guidelines on the use of AI, with some, including the Federal Court, requiring parties to declare when they have used generative AI.

Some lawyers who have used or encountered artificial intelligence in their work say it can be a useful tool when applied wisely. Used incorrectly, however, it can compromise privacy, impede communication, undermine trust and drive up legal costs, even when no financial penalties are imposed.

Xenia Chern McCallum, a Toronto-based immigration lawyer licensed to practise in both Canada and the U.S., said she is seeing more people arrive with AI-generated research, or even applications filled out with AI, that they then want her to review.

In other cases, clients use AI to “fact-check” her, running documents she has prepared through a platform, potentially exposing their personal information and undermining their trust in her work, she said.

“This can put a lot of strain on client relationships because if I instruct my client to do something and they doubt me or say, 'Well, I don't think I need to, or why should I do that?' and they resist… then how am I supposed to represent you and your interests?” McCallum said.

“AI can scour the internet and tell you generally what's part of that process, but my experience and my knowledge of what works and what doesn't work in those processes is something that AI won't be able to pick up.”


Online forums for those dealing with immigration issues are also encouraging people to use AI to prepare documents and save on legal fees, she said.

“They provide this material and then the court says, 'Okay, we see that you used AI, and you didn't disclose it. But not only have you not disclosed it, you're actually citing cases that don't exist, you're citing pathways that don't exist, you're citing law that's not relevant,'” McCallum said.

“People are actually having costs awarded against them because they come to court representing themselves, thinking that the AI is going to put these beautiful facts together for them, but not knowing that that's not what's going to happen.”


Trying to save money with AI can sometimes backfire, says Shulman, the family lawyer.

A client recently sent in five or six pages of AI-written material about the right of a married spouse to remain in the matrimonial home, essentially instructing the firm to include it in a court filing, he said. The problem? The client was not married, so none of it applied.

“You've just spent half an hour… of a retainer reading something (when) there's no use for it to begin with,” he said.

Shulman said he now has a basic disclaimer that he gives to clients, letting them know that he must read everything they send. He also encourages clients to ask him to explain legal concepts rather than turning to AI—or at least to let him show them how to use AI more effectively.

According to Jennifer Leitch, executive director of the National Self-Represented Litigants Project, there is a need for this type of guidance and information.

Last month, the organization held a webinar to help those without lawyers use AI correctly and safely in their cases, with about 200 people attending, she said, adding that more sessions are planned for the new year.

Leitch said she sees it almost as a form of harm reduction: “People are going to use it, so let's use it responsibly.”

Her advice includes checking all cases cited by AI to ensure they exist and are cited correctly, as well as looking for court guidance on AI and following document length limits.


Artificial intelligence has the potential to improve access to justice by giving people access to vast amounts of information and helping them organize their cases, but it is currently “a bit of a Wild West,” especially when it comes to reliability, Leitch said.

“For lawyers in law firms, there are amazing artificial intelligence programs that help with practice management, research, drafting, but it's all kind of behind a paywall,” she said.

“But open source stuff… is less reliable, and you run the risk of hallucinations, bugs, etc. that don't exist in programs behind a paywall.”

Law firms will have to use some form of artificial intelligence to stay competitive, says Nainesh Kotak, a personal injury and long-term disability lawyer in Toronto.

The key, he said, is for lawyers to review and correct what AI produces, and ensure compliance with privacy, data security and professional standards.

Ultimately, he said, AI is a tool and cannot replace legal judgment, ethical obligations and human understanding.
