When your lawyer is a bot

ChatGPT is the latest in a long list of AI- or Google-based “legal” services.  As a lawyer since 1990, I have had many clients, acquaintances, Reddit posters, etc., tell me what their own research has told them the law is.  They are nearly always completely wrong.  Sometimes they are partly right, but wrong enough to destroy any chance of success.

There are “online legal will kits”, “do your own divorce” kits, and lots and lots of forms and templates.  All garbage.

Law is not a cookbook.  Many laws differ from place to place: across Canada, family law, property law, and contract law can all vary.  In the USA, even criminal law varies state by state.  Google articles are not written by practicing lawyers.  Here’s an example.  I Googled, “Does a person’s debt go away when they die?”  The top reply was: “As heir, certain possessions, investments or other assets can be passed down to you, but the debts, which are never mentioned in a will, also become your responsibility. It's why it's very important to evaluate the assets and debts of an estate before accepting an inheritance.”  That advice was posted by one of the largest “Licensed Insolvency Trustees” in the world.  Their statement is complete b.s.  No one is liable for someone else’s debts.  No one.  Not ever.  The deceased’s debts are paid out of their estate, or not at all; they never pass to the heirs personally.

The gullible Googler would get a b.s. answer – and have no way to test it.

A lawyer friend of mine keeps testing ChatGPT.  He asks it to write an argument on a legal point, citing actual cases.  In seconds ChatGPT produces a beautiful argument.  Except some of the “cases” it cites don’t exist.  My pal challenges the computer: “I can’t seem to find that case.”  ChatGPT then gives him a complete “copy” of the case, with headings, etc., that looks exactly like a real decision.  Again, my buddy can’t find it in the court record, and tells ChatGPT.  At which point the AI “admits” it made up the case.

Let’s say you use a bot as your lawyer.  You go to court with your AI-generated argument.  The judge asks you to explain a little more about something.  (Every judge does this.  They don’t just read essays – they want to discuss the issues with a person in front of them.)  What do you do?  Ask Alexa?  You could say that the argument is from a bot.  I’m pretty certain the judge would throw you out of the room – maybe into jail.

Let’s say you lose.  The judge rules that your arguments are nonsense.  Now you lose your case, AND have to pay legal expenses to the winning side.  Who will you sue for negligence?

Let’s say you win your case.  Yay!  The Machine Lords rejoice.  But then the other side appeals.  Oh dear.  Do you think your AI will produce a different argument – one for the Appeal Court?  Or will it repeat the same argument it made in the court below?  Now you’ll have THREE judges questioning you.  Danger, Will Robinson!

The Law is not purely logical.  Law is based on human experience: messy, sometimes illogical, emotional, and above all, based on morality.  It would be logical to kill the people in the grocery store lineup ahead of you.  You’d get through the checkout quicker.  But (I hope) your sense of morality would prevent you.

Bots don’t have morality.  They are not programmed to find truth or morality.  They are given initial programming, then left to evolve in their own silos of machine reasoning, none of which is examinable.

Legal arguments are not just exercises in pure logic.  They always contain moral analysis.  We say: “No wrong without a remedy”, “best interests of the child”, “fair and reasonable”, “just result”.  How does a bot analyze any of those?  How can a machine apply morality?  Have you seen 2001: A Space Odyssey?  How about Terminator?

At Clear Legal, we have been human since before 1990.