> Mostafavi told CalMatters he wrote the appeal and then used ChatGPT to try and improve it. He said that he didn’t know it would add case citations or make things up. He thinks it is unrealistic to expect lawyers to stop using AI. [...] “In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages,” he said. “I hope this example will help others not fall into the hole. I’m paying the price.”
Wow. Seems like he really took the lesson to heart. We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?
21 of 23 citations are fake, and so is whatever reasoning they purport to support, and that's casually "adding some citations"? I sometimes use tools that do things I don't expect, but usually I'd like to think I notice when I check their work... if there were 2 citations when I started, and 23 when I finished, I'd like to think I'd notice.
> He thinks it is unrealistic to expect lawyers to stop using AI.
I disagree. The profession worked fine without AI until now, and using AI this way is clearly doing more harm than good, especially in situations where you hire an expert to help you.
Remember, a lawyer is someone who has actually passed a bar exam, and with that comes an understanding that whatever they sign, they validate as correct. The fact that they used AI here actually isn't the worst part. The fact that they blindly signed it afterwards is a sign that they are unfit to be a lawyer.
We can make the argument that this might have been pushed from upper management, but remember, the license is personal. They can't hide behind such a mandate.
It's the same discussion I'm having with colleagues about using AI to generate code, or to review it. At a certain point there is pressure to go faster, and stuff gets committed without a human ever touching it.
Until that software ends up on your glucose pump, or the system used to radiate your brain tumor.
I disagree with your disagreement. The legal profession is not "working until now" unless you're quite wealthy and can afford good representation. AI legal assistants will be incredibly valuable for a large swath of the population -- even if the outputs shouldn't be used to directly write briefs. The "right" answer is to build systems to properly validate citations and arguments.
Lawyer here. I'm not sure why you think AI will fix the first part. What AI does is not a significant part of the cost or labor in the vast majority of kinds of cases. If you have a specific area in mind, happy to go into it with you. The area where this kind of AI seems most likely to reduce cost is probably personal injury.
As for the last sentence, those systems already exist and roughly all sane lawyers use them. They are required to: you aren't allowed to cite overturned cases or bad law to courts, and haven't been for eons. This was true even before the process was automated. Completely automated systems have now been around for decades, and one is so popular it gave the task its name: "shepardize," from Shepard's Citations. So this is a double fault on the lawyer's part. These systems are well integrated, too. Even back in 2006 when I was in law school, the system I used published an extension for Microsoft Word that would automatically verify every quote and cite, make sure they were good law, and reformat them into the proper style (there were two major citation styles back then).
It has only improved since then. The last sentence is simply a solved problem. The lawyer just didn't do it because they were lazy and committed malpractice.
Incorrect legal information is generally less beneficial than no information at all. A dice roll of correct or incorrect information is potentially even worse.
Lawyers are required to actually cite properly and check their citations are correct, as well as verify they are citing precedent that is still good (ie has not been overturned).
This is known as shepardizing.
This is done automatically without AI and has been for decades.
I don't really see how this is any different from checking work from another human. If a lawyer tasks a staff member with doing some research for citations, and the staff member made up a bunch of them and the lawyer didn't check, that lawyer would be responsible as well. Just because it's AI and not a person doesn't make it less of an issue.
> AI legal assistants will be incredibly valuable for a large swath of the population
In my experience they're a boon to the other side.
Using AI to help prepare your case for presentation to a lawyer is smart. Using it to actually interact with an adversary's lawyer is very, very dumb. I've seen folks take what should have been a slam-dunk case and turn it into something I recommended a company fight because they were clearly using an AI to write letters, the letters contained categorically false representations, and those lies essentially tanked their credibility in the eyes of the, in one case, arbitrator, in another, the court. (In the first case, they'd have been better off representing themselves.)
His response is absurd. This is no different than having a human associate draft a document for a partner and then the partner shrugging their shoulders when it's riddled with errors because they didn't bother to check it themselves. You're responsible for what goes out in your name as an attorney representing a client. That's literally your job. What AI can help with is precisely this first level of drafting, but that's why it's even more important to have a human supervising and checking the process.
I came here to quote that exact part of the article.
My guess is that he probably doesn't believe that, but that he's smart enough to try to spin it that way.
Since his career should be taking at least a small hit right now, not only for getting caught using ChatGPT, but also for submitting blatant fabrications to the court.
The court and professional groups will be understanding, and want to help him and others improve, but some clients/employers will be less understanding.
The thing is, this statement is doing as much harm to his reputation as the original act, if not more. Who would hire this lawyer after he said something like that?
> We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?
Same with FSD at Tesla, there's many people who think that accidents and fatalities are "worth it" to get to the goal. And who cares if you, personally, disagree? They're comfortable that the risk to you of being hit by a Tesla that failed to react to you is an acceptable price of "the mission"/goal.
I've taken this hallucination issue to heart since the first time this kind of headline appeared, but if you started with the leading LLMs today, you wouldn't have this issue to nearly the same degree. I'd say it would be down to something like 1 out of 23 at this point.
Definitely keep verifying, especially because the models available to you keep changing if you use cloud services, but September 2025 is not June 2023 anymore, and the conversation needs to be much more nuanced.
Frankly I'd argue that something that produces 1 fake citation in 23 may be worse than something that produces 21. It's more likely to make people complacent and more likely to go undetected.
People have more car crashes in areas they know well because they stop paying attention. The same principle applies here.
All citations should have been shepardized. This has been standard practice for lawyers for decades. Court rules always require you to cite only good law, so you will be excoriated for valid-but-overturned citations too.
This is actually one of the more infuriating things about all of this. Non-lawyers read this stuff and they're like, oh look, it hallucinated some cases and citations. It should still have been caught 100% of the time, and anyone submitting briefs without verifying their cites is not fit to be a lawyer. It's malpractice, AI or not.
A lot of both defeatist and overly optimistic pro-AI comments in here. Having built legaltech and interfaced heavily with attorneys over the last 5 years of my career, I will say that there is a wide spectrum of experience, ethics, and intelligence in the field. Blindly copying output from anything and submitting it to the court seems like a mind-boggling move; it doesn't really make a difference if it was AI or Google or Bing or Thomson Reuters. This attorney is not representative of the greater population and probably had it coming, imho.
There is definitely benefit to using language models correctly in law, but lawyers differ from most users in that their professional reputation is at stake with respect to the output being created, so the risk of adoption is always going to be greater for them.
Do lawyers and judges not by now have software that turns all these citations into hyperlinks into some relevant database? Software that would also flag a citation as not having a referent? Surely this exists and is expensive but in wide usage...?
It's not a large step after that to verify that a quote actually exists in the cited document, though I can see how perhaps that was not something that was necessary up to this point.
I have to think the window on this being even slightly viable is going to close quickly. When you ship something to a judge and their copy ends up festooned with "NO REFERENT" symbols it's not going to go well for you.
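The referent-checking idea described above is simple to sketch: extract citation strings from a filing and look each one up. Here is a toy sketch in Python, where a hard-coded dict stands in for a real citator service (in practice you'd query Westlaw, LexisNexis, or CourtListener); the cases, the regex coverage, and the function names are illustrative assumptions, not any real product's API:

```python
import re

# Toy stand-in for a real citator database (Westlaw, LexisNexis, etc.).
# A real tool would query the service's API instead of a local dict.
KNOWN_CASES = {
    "550 U.S. 544": "Bell Atlantic Corp. v. Twombly",
    "556 U.S. 662": "Ashcroft v. Iqbal",
}

# Matches simple "volume Reporter page" strings, e.g. "550 U.S. 544" or
# "123 F.3d 456". Real Bluebook citations are far more varied than this.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d|Cal\.(?:App\.)?\s?\d?th)\s+\d{1,4}\b"
)

def flag_citations(brief_text):
    """Split a brief's citations into (found, missing) lists."""
    found, missing = [], []
    for cite in CITATION_RE.findall(brief_text):
        (found if cite in KNOWN_CASES else missing).append(cite)
    return found, missing

brief = "Under 550 U.S. 544 and the holding of 123 F.3d 9999, dismissal is required."
found, missing = flag_citations(brief)
# "550 U.S. 544" resolves; "123 F.3d 9999" would be festooned with a
# "NO REFERENT" flag for the judge to see.
```

This only catches citations that don't resolve at all; as other commenters note, verifying that a quote actually appears in the cited opinion, and that the case stands for the claimed proposition, is the harder and more important step.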
Lots of hallucination-verification tools exist, but legal tech tools usually charge an arm and a leg. This bloke probably used Gemini with the prompt "create law."
Part of the issue is that there's already a lot of manual entry, and a lot of small/regional courts with requirements specific to individual jurisdictions. Unification of standards is a long way away; I mean, tech hasn't even managed it.
>Do lawyers and judges not by now have software that turns all these citations into hyperlinks into some relevant database? Software that would also flag a citation as not having a referent? Surely this exists and is expensive but in wide usage...?
Why would I pay for software to do what I could do with my own eyes in 2 minutes?
Two minutes per what? Two minutes per citation is a huge time waste. Two minutes per filing full of citations is unrealistically fast and also adds up.
It's not two minutes per citation, and paying for something "very expensive" that has no use case unless you're using AI in the first place is a ridiculous proposition. To call what I wrote a "huge time waste" seems to really miss the point. The citations are already hyperlinks into "some relevant database". Attorneys figured that out when they started publishing judicial opinions in reporters (books). I take the citation string and it gives me the decision. Wow!
> Two minutes per filing full of citations is unrealistically fast and also adds up
Also adds up to what? The amount of time it takes to do your job? Vs not doing it and just letting AI rip? Oh I see, people here think that paying another service to fix the poor job the "do your job for you" AI service did makes sense? I don't think so. But then again I'm not the one wondering if lawyers keep cases stored in "relevant databases" out of what could only be sheer ignorance (I bet AI could get that one right). Never heard of Westlaw or LexisNexis?
>Then that's the actual answer to the question they were asking. It's already done before you're given the document.
It's not the actual answer. Having a standardized resource-location scheme has been a solved problem in the legal field for longer than I can say -- well before computers, that's for sure. Getting all the resources cited in any legal brief is probably one of the most trivial tasks an attorney does, if not a paralegal, and I'm sure there are scripts and apps for it too. You just type the cite in and poof! Out comes the document!
>But I don't know what takes "2 minutes" then? Checking to make sure they actually have hyperlinks?
The task in question: seeing whether the "hyperlink" returns the case the citation points to. It's even more trivial than getting the resource itself. You type it in and don't even have to click the download button. Of course, an attorney actually has to read a case to see if it stands for the proposition they are using it for. But you don't care about that because you're a super genius entertaining a hypothetical.
>That is not what they asked. You only make yourself look bad when you change the question before mocking it.
I'm not the one making myself look bad because I'm not the one entertaining the nonsensical hypothetical of some other internet super genius who knows so much about the legal profession that they've never heard of LexisNexis, WestLaw, the Federal Reporter or the Blue Book.
Legal citations are already a format you can plug into a legal database to get a result, so the idea that it'd be some sort of improvement to see if a citation actually exists when your AI makes up complete bullshit isn't an advancement, it's back to the status quo. Because an attorney needs to actually read the cases they cite to be sure they stand for the proposition they are relying upon. They also need to read the propositions they are representing to the court. So nothing has changed about that. But again, you don't realize that because you have no idea what an attorney needs to do to make their time more valuable. (It's hiring and training junior attorneys.) That's what senior lawyers have been doing forever.
Actual attorneys do the job way better than AIs. That hasn't changed yet, and it doesn't seem like it will anytime soon based upon the AI demos I've been given. The only people who tell me otherwise are posters like you and the hucksters that sell the demos. At least the hucksters have that excuse. You're some poster who's going to argue with me about what will make my job easier, having no idea what I actually spend my time doing. It's always the posters that abstract it into some obtuse operation like "filling out a legal document with facts and relevant law," as if those were just two buckets getting sorted. It's so tiresome.
> The task at question. Seeing if "hyperlink" returns a case the citation.
So that's per citation? Then two minutes each is a waste of time for basic checking.
> who knows so much about the legal profession that they've never heard of LexisNexis, WestLaw, the Federal Reporter or the Blue Book.
This accusation is not supported by what they said.
> Legal citations are already a format you can plug into a legal database to get a result, so the idea that it'd be some sort of improvement to see if a citation actually exists when your AI makes up complete bullshit isn't an advancement, it's back to the status quo. Because an attorney needs to actually read the cases they cite to be sure they stand for the proposition they are relying upon.
Again a lot of that comment is about getting a document from someone else and quickly checking validity.
I would suggest to you some pondering on the meaning of the rare punctuation "...?". In your zeal to look good by correcting, you have completely misread the question... and gotten called on it. Digging in deeper isn't helping.
Doesn't seem like there's any real disciplinary action. You can just make stuff up, and if you're caught, pay some pocket change (in lawyer-money territory) and move on.
The hallucinations in legal briefs get really out of hand when the attorney wants to make an argument not supported by the case law. The LLM wants to do a good job defending the case, so it invents the legal precedent, because otherwise it'd be impossible to make the argument credibly. This invites a Rule 11 challenge from the other side, where you claim the lawyer is so full of crap with his claim that he deserves sanction for not understanding the law and wasting everyone's time.
What's interesting about the rules of civil procedure is that they have been built up over centuries to prevent all kinds of abuse by sneaky, clever, unscrupulous litigants. Most systems are not as hardened against bad-faith actors as the legal system is, and AI just thinks it can pathologically lie its way through, because most people trust somebody who sounds authoritative.
I just read the initial complaint, what do you think about that case? Is there a community that wants disclosure of "chatter's" existence? It seems to be going the other way with AI personalities doing the chatting
I know a lawyer who almost took a job in state government where one of the primary duties was to make sure that the punctuation in the bills going through the state legislature was correct and accurate. For her, part of the appeal of the job was that it would allow her to subtly alter the meaning of a bill being presented. Apparently it is a non-trivial skill to be able to determine how judges are likely to rule on cases due to, say, the presence or absence of an Oxford comma.
There was an entire team dedicated to this work, and the hours were insane when the legislature was in session. She ended up not taking the job because of the downsides associated with moving to the capital, so I don't know more about the job. I'd be curious how much AI has changed what that team does now. Certainly, they still would want to meticulously look at every character, but it is certainly possible that AI has gotten better at analyzing the "average" ruling, which might make the job a little easier. What I know about law though, is that it's often defined by the non average ruling, that there's sort of a fractal nature to it, and it's the unusual cases that often forever shape future interpretations of a given law. Unusual scenarios are something that LLMs generally struggle with, and add to that the need to creatively come up with scenarios that might further distort the bill, and I'd expect LLMs to be patently bad at creating laws. So while, I have no doubt that legislators (and lobbyists) are using AI to draft bills, I am positive that there is still a lot of work that goes into refining bills, and we're probably not seeing straight vibe drafting.
Bank lobbyists, for example, authored 70 of the 85 lines in a Congressional bill that was designed to lessen banking regulations – essentially making their industry wealthier and more powerful. Our elected officials are quite literally, with no exaggeration, letting massive special interests write in the actual language of these bills in order to further enrich and empower themselves… because they are too lazy or disinterested in the actual work of lawmaking themselves.
A two-year investigation by USA Today, The Arizona Republic, and The Center for Public Integrity found widespread use of "copycat bills" at both federal and state levels. Copycat legislation is the phenomenon in which lawmakers introduce bills that contain identical language and phrases to "model bills" drafted by corporations and special interests for lobbying purposes. In other words, these lawmakers essentially copy-pasted the exact words that lobbyists sent them.
From 2011 to 2019, this investigation found over 10,000 copycat bills that lifted entire passages directly from well-funded lobbyists and corporations. 2,100 of these copycat bills were signed into law all across the country. And more often than not, these copycat bills contain provisions specifically designed to enrich or protect the corporations that wrote the initial drafts.
I mean, we've seen laws that were written by lobbyists with zero changes. Does it matter if it was AI generated or not at that point? The congress critters are not rewriting what they've been told to do if they've even read it after being told what to do.
This is why there are certain jobs AI can never take: we are wired for humans to be responsible. Even though a pilot can do a lot of his work via autopilot, we need a human to be accountable. For the pilot, that means sitting in the plane. But there are plenty of other jobs, mostly high-earning experts, where we need to be able to place responsibility on a person. For those jobs, the upside is that the tool will still be available for the expert to use and capture the benefits from.
This lawyer fabricating his filings is going to be among the first in a bunch of related stories: devs who check in code they don't understand, doctors diagnosing people without looking, scientists skipping their experiments, and more.
> This is why there are certain jobs AI can never take
You're thinking too linearly imo. Your examples are where AI will "take", just perhaps not entirely replace.
I.e., if liability is the only thing stopping them from being replaced, what's stopping them from simply assuming more liability? Why can't one lawyer assume the liability of ten lawyers?
People who think like this cannot be convinced; they're unaware of the acceleration of the rate of progress, and it won't change until they clash with reality. Don't waste your time and energy trying to convince them.
They don't understand how to calibrate their model of the world with the shape of future changes.
The gap between people who've been paying attention and those who haven't is going to increase, and the difficulty in explaining what's coming is going to keep rising, because humans don't do well with nonlinearities.
The robots are here. The AI is here. The future is now, it's just not evenly distributed, and by the time you've finished arguing or explaining to someone what's coming, it'll have already passed, and something even weirder will be hurtling towards us even faster than whatever they just integrated.
Sometime in the near future, there won't be much for people to do but stand by in befuddled amazement and hope the people who set this all in motion knew what they were doing (because if we're not doing that, we're all toast anyway.)
The book https://en.wikipedia.org/wiki/The_Unaccountability_Machine introduces the term "accountability sink", which is very useful for these discussions. Increasingly complicated systems generate these voids, where ultimately no human can be singled out or held responsible.
AI offers an incredible caveat emptor tradeoff: you can get a lot more done more quickly, so long as you don't care about the quality of the work, and cannot hold anyone responsible for that quality.
There you have it --- proof that lawyers love AI --- in more ways than this one example illustrates.
Using a tool that is widely known to be flawed to provide any sort of professional service (legal, medical, accounting, engineering, banking, etc.) is pretty much a textbook definition of negligence.
$10,000? That's a slap on the wrist. I don't say this lightly, this should have been jail time for someone. You're making a mockery of our most sacred institutions.
For a first offense of something like this, you'd suggest jail time? I hope you find excuses to skip your next jury summons. It is typical in jury selection to be asked by the defense if you'd be able to agree with a minimum sentence, while the prosecutors like to ask if you'd be able to agree to the maximum. I was personally asked if I could agree to 99 years for someone's first offense of GTA. I said no and was dismissed. Sounds like you'd have said yes.
For someone who had to attend 6+ years of school and had to pass a professional licensing exam with ethics questions? Yes, I do. $10,000 is one week of billable hours at $250/hr.
Do you think a Civil Engineer (PE) should be held liable if they vibe engineered a bridge using an LLM without reviewing the output? For this hypothetical, let’s assume an inspector caught the issue before the bridge was in use, but it would’ve collapsed had the inspector not noticed.
No single civil engineer designs a bridge though, do they? So the premise of your retort is just way off here. No bridge plan is made without reviews after one person presses print on the plotter. Even the construction company hired to build the bridge will review the plans before they break ground. If someone builds a bridge on their private property and hires their nephew, that's on them. An actual civil project? Nope, I reject your premise outright.
I would wager that plenty of bridges are designed by a single engineer, most bridges are not massive 8-lane highway bridges but small bridges in municipalities.
A single person can design a building, why not a bridge?
So you're saying that when you run those jobs, neither you nor the team you sell the job to has anyone who's ever built anything before to look at the plans and ask questions? If I accept the premise that a single person designed a bridge, is that all that would be done to ensure it meets the specifications? You're saying nobody would ever review the plans? Nobody would say the bolts being used are too small for their purpose, or any number of things that could pop up? The concrete pads are insufficient? The steel I-beams are too thin? Someone would just take the plans exactly as listed, purchase the material as listed, and not one question would ever be raised? I would never trust a construction team that didn't raise questions, if only to see whether they themselves could skimp on material and pocket the difference.
You raise a number of good points; my example wasn't as strong as I thought. I was attempting to contrive a scenario similar to a lawyer using an LLM and not reviewing the output, and the civil engineering example isn't a great fit due to the issues you raised.
> Someone would just take the plans exactly as listed, purchase the material as listed, and not one question ever would be raised? I would never trust a construction team that didn’t raise questions if not even to see if they themselves could skimp on material to pocket the difference.
You’re right, the contractor would likely catch the design issues if there were any, and possibly before that in the plan review/permitting process if the AHJ is on the ball.
I work in the electrical trade and I (and my electricians) find and correct errors frequently in engineered plans. We tell the engineer if it costs us more money to attempt to get a contract change order, but we keep it to ourselves if we can do it safely for cheaper. A common scenario I run into is a design with oversized feeders where you can use a smaller wire and still meet code, we just pull the smaller conductors and pocket the difference (assuming you bid the project using the larger wire size)
There's something grimly hilarious about knee-jerk demands for jail time for [other profession] for using AI, when a bunch of us here are eagerly adopting it into our own workflows as fast as we can.
Why jail time for lawyers who use ChatGPT, but not programmers? Are we that unimportant compared to the actual useful members of society, whose work actually has to be held to standards?
I don't think you meant it this way, but it feels like a frank admission that what we do has no value, and so compared to other people who have to be correct, it's fine for us to slather on the slop.
The jail time wouldn't be for using AI. It would be for submitting a document to the court that would have gotten an F in any law school.
Sort of like recklessly vibe coding and pushing to prod. The cardinal rule with AI is that we should all be free to use it, but we're still equally responsible for the output we produce, regardless of the tooling we use to get there. I think that applies equally across professions.
> Why jail time for lawyers who use ChatGPT, but not programmers? Are we that unimportant compared to the actual useful members of society, whose work actually has to be held to standards?
Programmers generally don't need a degree or license to work. Anyone can become a programmer after a few weeks of work. There are no exams to pass unlike doctors or lawyers.
LLMs are just not the technology for "search" and citing references, no matter how "agentic" you make them.
They are great for a large number of other things, like generating code for example.
We are so focused on trying to shoehorn LLMs into something they are fundamentally unsuitable for that we are probably missing the discovery of the technology that would actually solve this.
Everyone is somewhat missing the point here that the California bar is making.
They don't care if you use an AI or a Llama spitting at a board of letters to assemble your case, you are responsible for what you submit to the court.
This is just a bad lawyer who probably didn't check their work in many other cases, and AI just enabled that bad behavior further.
The American Bar Association, as well as every state bar association, published guidance on GenAI usage at least a year ago. The existing legal and ethical responsibilities go beyond just being responsible for hallucinations. Client information privacy, accurately tracking time and billing only for time actually spent on the case, etc.
As time goes on, it becomes less and less defensible to see this stuff.
This. Lawyers can use AI tools. But an attorney is ultimately responsible for everything that goes out the door. So they had better check the output from their AI tool carefully. In most firms (at least most good ones) someone would check every citation that goes out the door, even if it was written by an experienced attorney, so it speaks volumes that someone would fail to check citations generated by an LLM.
> He thinks it is unrealistic to expect lawyers to stop using AI. It’s become an important tool just as online databases largely replaced law libraries and, until AI systems stop hallucinating fake information, he suggests lawyers who use AI to proceed with caution.
I think this is a good reason for fines to not be incredibly big. People are using AI all the time. There will be a growing period until they learn of its limitations.
The slope of the learning curve can be adjusted, though, with the level of fines being a big lever. Only I'd suggest escalating not just on any one lawyer's first offense, but per incident for each firm.
I'm numb to it after many "EU fines Householdnamecorp a zillion doubloons" type headlines, but using "historic fine" to describe $10k to a lawyer feels odd.
> The fine appears to be the largest issued over AI fabrications by a California court
This is a bit like all the stats like "this is appears to be an unprecedented majority in the last 10 years in a Vermont county starting with G for elections held on the 4th when no candidate is from Oklahoma".
Lots of things are historic but that doesn't necessarily mean they're impressive overall. More interesting is how many of these cases have already been tried such that this isn't "historic" for being the first one decided.
Expecting the same level of fine for an individual person as for a faceless corp really shows how numb you must be. For an attorney to be fined that much is not normal. TFA even shows examples of higher fines issued to law firms; while still not as high as your zillion-doubloon hyperbole, it still shows the distinction between an individual and a s/corporation/law firm/. EU fines have been getting progressively higher, especially for repeat offenders. It would be unwise to expect different in legal matters.
His website makes him look like the owner of a law firm, although I think it's just him? I'm not expecting the same number, but... California issues bigger fines for watering lawns or buying illegal fireworks. For a lawyer, a fine an order of magnitude smaller than "hiring a paralegal" is less "historic" and more "cost of doing business, don't get caught."
California issues higher fines for littering than for abandoning an animal on the highway. It's listed right there on the highway signs as you enter the state. It stood out when I saw it for the first time.
Fines are arbitrary numbers set by some people not necessarily knowing about other fines for other offenses.
The legal system is not an API you can spam. You must have credentials that involve going to school for 3 years and passing a very tough exam to even submit to it. If you make a submission that is not fully syntax and reference checked, you can face a $10,000 fine and suspension of your license to submit to it. You do not waste the legal system's time with AI slop.
I have worked in litigation, and if you submit a filing to a court clerk that is not perfect, you won't even get a reply that it's no good. They'll just throw your brief in the trash without looking at it and never call you back.
$10k is not a high enough fine to dissuade this behavior. Large law firms will just approach this the same way that big tech accepts GDPR fines as part of the cost of doing business.
The article links to the opinion[1], which notes more than once that "the quoted language does not appear anywhere in the opinion," and "Goldstine appears to be a fabricated case." I don't know whether it's easy to get a copy of the complaint in question.
$10,000, wow what a shocker! I don't know how anyone who can afford to live in California could ever expect to pay such a fine. I expect the lawyer will soon have to declare bankruptcy.
It's the first of its kind and, I would wager, more of a warning shot. $10k on a case may not be much, but tens of thousands multiplied by hundreds of cases won't be negligible for a lawyer or a small firm. Not to mention the prospect of loss of the case and of reputation, and consequently future business, due to such actions.
I searched for the lawyer's name in the state bar association, they've been practicing for over 13 years. Even has electrical engineering in their background.
It's fun to point and laugh in this scenario where the attorney just threw slop at the court. However, these stories won't be around long.
Tools like Lexis+ from LexisNexis[1] now offer grounding, so it won't be as simple to bust people cutting corners in the future because these prevent the hallucinations.
We're now closer to the real Butlerian Jihad when we see pro se plaintiffs/defendants winning cases regularly.
This is just incredibly defeatist from everyone talking here.
Here we have irrefutable evidence of just how bad the result of using AI would be and yet... the response is just that we need to accept that there is going to be damage but keep using it?!?
This isn't a tech company that "needs" to keep pushing AI because investors seem to think it is the only path of the future. There is absolutely zero reason to keep trying to shoehorn this tech in places it clearly doesn't belong.
We don't need to accept anything here. Just don't use it... why is that such a hard concept.
> Here we have irrefutable evidence of just how bad the result of using AI would be
It's not just the result of using AI, it's the result of failing to vet the information he was providing the court. The same thing could've happened if he hired a paralegal from Fiverr to write his pleadings and didn't check their work.
It's like saying that because he typed it on a computer, it's the computers that are the problem, and we shouldn't keep using them.
We're already at least a year past AI tools having the ability to perform grounding (Lexis+ from LexisNexis, as I cited on another comment in this post, for example), so this whole fiasco is already something from a bygone era.
It's not (just) defeatist, it's fatalistic. And I agree. There can be cultural underpinnings here, too--the attorney is CA-based, and though LA is distinct from SF, I wonder if there isn't a thread of the "move fast and break things" ethos showing up as well.
For so many people, option A is chatgpt lawyer and option B is no lawyer. When the hourly billing of a lawyer approaches the weekly pay of a worker, something’s gotta give
Have LLMs resulted in a democratization of law where anyone can now afford to hire a lawyer? As far as I know, the answer is no. Lawyers who use unreliable tools to generate fake citations are still charging just as much.
The fact that the lawyer is unrepentant and says that it is simply too convenient and "there will be some victims, tough shit", basically, means the fine was a zero or two too short. Deterrence needs to hurt!
AI constantly fabricates references. I found a question recently (accidentally) which would make it fabricate every answer. Otherwise, it tends to only fabricate about a third of them, and somehow miss dozens of others that it should easily find.
After asking for recommendations, I always immediately ask it if any are hallucinations. It then tells me a bunch of them are, then goes "Would you like more information about how LLMs "hallucinate," and for us to come up with an architecture that could reduce or eliminate the problem?" No, fake dude, I just want real books instead of imaginary ones, not to hear about your problems.
detail: The question was to find a book that examined how the work of a short order cook is done in detail, or any book with a section covering this. I started the question by mentioning that I already had Fast Foods and Short Order Cooking (1984) by Popper et al., and that was the best I had found so far.
It gave me about a half dozen great hallucinations. You can try the question and see how it works for you. They're so dumb. Our economy is screwed.
> Mostafavi told CalMatters he wrote the appeal and then used ChatGPT to try and improve it. He said that he didn’t know it would add case citations or make things up. He thinks it is unrealistic to expect lawyers to stop using AI. [...] “In the meantime we’re going to have some victims, we’re going to have some damages, we’re going to have some wreckages,” he said. “I hope this example will help others not fall into the hole. I’m paying the price.”
Wow. Seems like he really took the lesson to heart. We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?
21 of 23 citations are fake, and so is whatever reasoning they purport to support, and that's casually "adding some citations"? I sometimes use tools that do things I don't expect, but usually I'd like to think I notice when I check their work... if there were 2 citations when I started, and 23 when I finished, I'd like to think I'd notice.
> He thinks it is unrealistic to expect lawyers to stop using AI.
I disagree. It worked until now, and using AI is clearly doing more harm than good, especially in situations where you hire an expert to help you.
Remember, a lawyer is someone who actually has passed a bar exam, and with that there is an understanding that whatever they sign, they validate as correct. The fact that they used AI here actually isn't the worst. The fact that they blindly signed it afterwards is a sign that they are unfit to be a lawyer.
We can make the argument that this might be pushed from upper management, but remember, the license is personal. So it's not that they can hide behind such a mandate.
It's the same discussions I'm having with colleagues about using AI to generate code, or to review code. At a certain moment there is pressure to go faster, and stuff gets committed without a human touching it.
Until that software ends up on your glucose pump, or the system used to radiate your brain tumor.
> The fact that they blindly signed it afterwards is a sign that they are unfit to be a lawyer.
Yes, this is the crux of it. More than any other thing you pay a lawyer to get the details right.
I disagree with your disagreement. The legal profession is not "working until now" unless you're quite wealthy and can afford good representation. AI legal assistants will be incredibly valuable for a large swath of the population -- even if the outputs shouldn't be used to directly write briefs. The "right" answer is to build systems to properly validate citations and arguments.
Lawyer here. I'm not sure why you think AI will fix the first part. What AI does is not a significant part of the cost or labor in the vast majority of kinds of cases. If you have a specific area in mind, happy to go into it with you. The area where this kind of AI seems most likely to reduce cost is probably personal injury.
As for the last sentence, those systems already exist and roughly all sane lawyers use them. They are required to. You aren't allowed to cite overturned cases or bad law to courts, and haven't been allowed for eons. This was true even before the process was automated completely. But now completely automated systems have been around for decades, and one is so popular it caused the word "shepardize" to be coined for the task. These systems are integrated well, too. Even back in 2006 when I was in law school, the system I used published an extension for Microsoft Word that would automatically verify every quote and cite, make sure they were good law, and also reformat them into the proper style (there were two major citation styles back then). It has only improved since then. So this is a double fault on the lawyer's part. The last sentence is simply a solved problem. The lawyer just didn't do it because they were lazy and committed malpractice.
Incorrect legal information is generally less beneficial than no information at all. A dice roll of correct or incorrect information is potentially even worse.
Lawyers are famously never wrong
Lawyers are famously subject to sanctions and malpractice lawsuits if they, say, don’t read the motions they file on your behalf.
Yes yes, there are many annoying rules in the legal system meant to keep regular people from having any legal power directly.
Lawyers are required to actually cite properly and check their citations are correct, as well as verify they are citing precedent that is still good (ie has not been overturned).
This is known as shepardizing.
This is done automatically without AI and has been for decades.
I don't really see how this is any different from checking for work from another human. If a lawyer tasks another staff to do some research for citations, and the staff made up a bunch of them and the lawyer didn't check, that lawyer would be responsible as well. Just because it's AI and not a person doesn't make it less of an issue.
> AI legal assistants will be incredibly valuable for a large swath of the population
In my experience they're a boon to the other side.
Using AI to help prepare your case for presentation to a lawyer is smart. Using it to actually interact with an adversary's lawyer is very, very dumb. I've seen folks take what should have been a slam-dunk case and turn it into something I recommended a company fight because they were clearly using an AI to write letters, the letters contained categorically false representations, and those lies essentially tanked their credibility in the eyes of the, in one case, arbitrator, in another, the court. (In the first case, they'd have been better off representing themselves.)
There are, we have to accept, alternate solutions as well.
You could, give an example to support your argument as opposed to just telling everyone that you are right.
> using AI is clearly doing more harm than good
How do you know this? Wouldn't we expect the benefits of AI in the legal industry to be way less likely to make the front page of HN?
> He thinks it is unrealistic to expect lawyers to stop using AI
Sure. It's also unrealistic to expect nobody to murder anyone. That's why we invented jail.
His response is absurd. This is no different than having a human associate draft a document for a partner and then the partner shrugging their shoulders when it's riddled with errors because they didn't bother to check it themselves. You're responsible for what goes out in your name as an attorney representing a client. That's literally your job. What AI can help with is precisely this first level of drafting, but that's why it's even more important to have a human supervising and checking the process.
I came here to quote that exact part of the article.
My guess is that he probably doesn't believe that, but that he's smart enough to try to spin it that way.
Since his career should be taking at least a small hit right now, not only for getting caught using ChatGPT, but also for submitting blatant fabrications to the court.
The court and professional groups will be understanding, and want to help him and others improve, but some clients/employers will be less understanding.
The thing is, this statement is doing as much harm to his reputation as the original act, if not more. Who would hire this lawyer after he said something like that?
> We're so helpless in the face of LLM technology that "having some victims, having some damages" (rather than reading what you submit to the court) is the inevitable price of progress in the legal profession?
Same with FSD at Tesla, there's many people who think that accidents and fatalities are "worth it" to get to the goal. And who cares if you, personally, disagree? They're comfortable that the risk to you of being hit by a Tesla that failed to react to you is an acceptable price of "the mission"/goal.
> 21 of 23 citations are fake
This was from the model available in June 2023
I've taken this hallucination issue to heart since the first time this headline occurred, but if you just started with leading LLMs today, you wouldn't have this issue. I'd say it would be down to like 1 out of 23 at this point.
Definitely keep verifying especially because the models available to you keep changing if you use cloud services, but this September 2025 is not June 2023 anymore and the conversation needs to be much more nuanced.
Frankly I'd argue that something that produces 1 in 23 fake citations may be worse than producing 21 fake citations. It's more likely to make people complacent and more likely to go undetected.
People have more car crashes in areas they know well because they stop paying attention. The same principle applies here.
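A rough back-of-envelope sketch supports this. Assuming, purely hypothetically, that a 1-in-23 fabrication rate applies independently to each citation, a 23-citation brief is still more likely than not to contain at least one fake:

```python
# If each citation independently has a 1-in-23 chance of being fabricated
# (a hypothetical rate borrowed from the ratio in this case), compute the
# chance a 23-citation brief contains at least one fake.
p_fake = 1 / 23
n_citations = 23

p_clean = (1 - p_fake) ** n_citations        # chance every citation is real
p_at_least_one_fake = 1 - p_clean

print(f"{p_at_least_one_fake:.0%}")          # roughly 64%
```

So even the "improved" rate leaves most briefs tainted unless every citation is still checked by hand.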
All citations should have been shepardized. This has been standard practice for lawyers for decades. Court rules always require you to only cite good law, so you will be excoriated for valid but overturned citations too.
This is actually one of the more infuriating things about all of this. Non-lawyers read this stuff and they’re like oh look it hallucinated some cases and citations. It actually should still have been caught 100% of the time and anyone submitting briefs without verifying their cites is not fit to be a lawyer. It's malpractice, AI or not.
yep, possibly. I’m glad we have a way to see how the situation has improved
A lot of both defeatist and overly optimistic pro-AI comments in here. Having built legaltech and interfaced heavily with attorneys over the last 5 years of my career, I will say that there is a wide spectrum of experience, ethics, and intelligence in the field. Blindly copying output from anything and submitting it to the court seems like a mind-boggling move; it doesn't really make a difference if it was AI or Google or Bing or Thomson Reuters. This attorney is not representative of the greater population and probably had it coming imho.
There is definitely benefit to using language models correctly in law, but they are different than most users in that their professional reputation is at stake wrt the output being created and the risk of adoption is always going to be greater for them.
Do lawyers and judges not by now have software that turns all these citations into hyperlinks into some relevant database? Software that would also flag a citation as not having a referent? Surely this exists and is expensive but in wide usage...?
It's not a large step after that to verify that a quote actually exists in the cited document, though I can see how perhaps that was not something that was necessary up to this point.
I have to think the window on this being even slightly viable is going to close quickly. When you ship something to a judge and their copy ends up festooned with "NO REFERENT" symbols it's not going to go well for you.
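The flagging described above could be sketched in a few lines. Everything here is hypothetical: the regex covers only a few reporter formats, and the lookup table stands in for a real service such as Westlaw, LexisNexis, or CourtListener.

```python
import re

# Pull reporter-style citations out of a brief and flag any that have no
# referent in a database. The pattern below handles only a few common
# reporter abbreviations; a real tool would need a far richer grammar.
CITATION_RE = re.compile(r"\d+\s+(?:U\.S\.|F\.\d?d|Cal\.\s?App\.\s?\d?th)\s+\d+")

CASE_DATABASE = {  # hypothetical stand-in for a real citation service
    "347 U.S. 483",  # Brown v. Board of Education
}

def flag_missing_referents(brief_text):
    """Return citations in the text with no match in the database."""
    found = CITATION_RE.findall(brief_text)
    return [cite for cite in found if cite not in CASE_DATABASE]

brief = "As held in 347 U.S. 483 and 999 U.S. 111, ..."
print(flag_missing_referents(brief))  # ['999 U.S. 111'] -- NO REFERENT
```

The hard part, as noted elsewhere in the thread, isn't the lookup; it's verifying that the cited case actually stands for the proposition asserted.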
Lots of hallucination verification tools exist, but legal tech tools usually charge an arm and a leg. This bloke probably used gemini with the prompt "create law"
Part of an issue is that there's already in existence a lot of manual entry and a lot of small/regional courts with a lot of specificity for individual jurisdictions. Unification of standards is a long way away, I mean, tech hasn't even done it
>Do lawyers and judges not by now have software that turns all these citations into hyperlinks into some relevant database? Software that would also flag a citation as not having a referent? Surely this exists and is expensive but in wide usage...?
Why would I pay for software what I could do with my own eyes in 2 minutes?
Two minutes per what? Two minutes per citation is a huge time waste. Two minutes per filing full of citations is unrealistically fast and also adds up.
It's not two minutes per citation and the notion that something that is "very expensive" but otherwise doesn't have a use case unless you're using AI in the first place is a ridiculous proposition. To call what I wrote a "huge time waste" seems to really miss the point. The citations are already hyperlinks into "some relevant database". Attorneys figured that out when they started publishing judicial opinions in reporters (books). I take the citation string and it gives me the decision. Wow!
> Two minutes per filing full of citations is unrealistically fast and also adds up
Also adds up to what? The amount of time it takes to do your job? Vs not doing it and just letting AI rip? Oh I see, people here think that paying another service to fix the poor job the "do your job for you" AI service did makes sense? I don't think so. But then again I'm not the one wondering if lawyers keep cases stored in "relevant databases" out of what could only be sheer ignorance (I bet AI could get that one right). Never heard of Westlaw or LexisNexis?
> The citations are already hyperlinks
Then that's the actual answer to the question they were asking. It's already done before you're given the document.
But I don't know what takes "2 minutes" then? Checking to make sure they actually have hyperlinks?
> But then again I'm not the one wondering if lawyers keep cases stored in "relevant databases" out of what could only be sheer ignorance
That is not what they asked. You only make yourself look bad when you change the question before mocking it.
> otherwise doesn't have a use case unless you're using AI in the first place
Their full post was describing something that has a use case if someone else is using AI.
> The amount of time it takes to do your job? Vs not doing it and just letting AI rip?
No, see previous.
>Then that's the actual answer to the question they were asking. It's already done before you're given the document.
It's not the actual answer. Having a standardized resource location scheme has been a solved problem in the legal field for years beyond my knowledge. Well before computers, that's for sure. Getting all the resources cited in any legal brief is probably one of the most trivial tasks an attorney does, if not a paralegal, and I'm sure there are scripts and apps for it too. You just type the cite in and poof! Out comes the document!
>But I don't know what takes "2 minutes" then? Checking to make sure they actually have hyperlinks?
The task in question: seeing if the "hyperlink" returns a case for the citation. It's even more trivial than getting the resource itself. You type it in and don't even have to click the download button. Of course, an attorney actually has to read a case to see if it stands for the proposition they are using it for. But you don't care about that because you're a super genius entertaining a hypothetical.
>That is not what they asked. You only make yourself look bad when you change the question before mocking it.
I'm not the one making myself look bad because I'm not the one entertaining the nonsensical hypothetical of some other internet super genius who knows so much about the legal profession that they've never heard of LexisNexis, WestLaw, the Federal Reporter or the Blue Book.
Legal citations are already a format you can plug into a legal database to get a result, so the idea that it'd be some sort of improvement to see if a citation actually exists when your AI makes up complete bullshit isn't an advancement, it's back to the status quo. Because an attorney needs to actually read the cases they cite to be sure they stand for the proposition they are relying upon. They also need to read the propositions they are representing to the court. So nothing has changed about that. But again, you don't realize that because you have no idea what an attorney needs to do to make their time more valuable. (It's hiring and training junior attorneys.) That's what senior lawyers have been doing forever.
Actual attorneys do the job way better than AIs. That hasn't changed yet, and it doesn't seem like it will anytime soon based upon the AI demos I've been given. The only people who tell me otherwise are posters like you and the hucksters that sell the demos. At least the hucksters are selling something. You're some poster who's going to argue with me about what will make my job easier, having no idea what I actually spend my time doing. It's always the posters that abstract it into some obtuse operation like "filling out a legal document with facts and relevant law", like they are just two buckets getting sorted. It's so tiresome.
> The task at question. Seeing if "hyperlink" returns a case the citation.
So that's per citation? Then two minutes each is a waste of time for basic checking.
> who knows so much about the legal profession that they've never heard of LexisNexis, WestLaw, the Federal Reporter or the Blue Book.
This accusation is not supported by what they said.
> Legal citations are already a format you can plug into a legal database to get a result, so the idea that it'd be some sort of improvement to see if a citation actually exists when your AI makes up complete bullshit isn't an advancement, it's back to the status quo. Because an attorney needs to actually read the cases they cite to be sure they stand for the proposition they are relying upon.
Again a lot of that comment is about getting a document from someone else and quickly checking validity.
>So that's per citation? Then two minutes each is a waste of time for basic checking.
No, it's not two minutes per citation.
>This accusation is not supported by what they said.
It is.
>Again a lot of that comment is about getting a document from someone else and quickly checking validity.
You have no idea what you're talking about. Have a nice time going off.
I would suggest to you some pondering on the meaning of the rare punctuation "...?". In your zeal to look good by correcting, you have completely misread the question... and gotten called on it. Digging in deeper isn't helping.
It's not me that's not getting it.
Wonder what the State Bar of CA would have to say about this:
https://apps.calbar.ca.gov/attorney/Licensee/Detail/282372
Doesn't seem like there is any kind of disciplinary action. You can just make up stuff and, if you're caught, pay some pocket change (in lawyer-money territory) and move on.
Fines above $1k must be reported to state bar in CA. So they will know about this one.
In an ongoing OnlyFans case, the plaintiffs' attorneys made a filing with, I believe, entirely fabricated case references:
https://www.courtlistener.com/docket/68990373/nz-v-fenix-int...
The hallucinations in legal briefs get really out of hand when the attorney wants to make an argument not supported by the case law. The LLM wants to do a good job defending a case, so it invents the legal precedent, because otherwise it'd be impossible to make the argument credibly. This invites a rule 11 challenge from the other side, where you claim the lawyer is so full of crap with his claim that he deserves sanction for not understanding the law and wasting everyone's time.
What's interesting about the rules of civil procedure is that they have been built up over centuries to prevent all kinds of abuse by sneaky, clever, unscrupulous litigants. Most systems are not so hardened against bad-faith actors the way the legal system is, and AI just thinks it can pathologically lie its way through, because most people trust somebody who sounds authoritative.
I just read the initial complaint, what do you think about that case? Is there a community that wants disclosure of "chatter's" existence? It seems to be going the other way with AI personalities doing the chatting
> In recent weeks, she’s documented three instances of judges citing fake legal authority in their decisions.
So lawyers use it, judges use it ... have we seen evidence of lawmakers submitting AI-generated language in bills or amendments?
I know a lawyer who almost took a job in state government where one of the primary duties was to make sure that the punctuation in the bills going through the state legislature was correct and accurate. For her, part of the appeal of the job was that it would allow her to subtly alter the meaning of a bill being presented. Apparently it is a non-trivial skill to be able to determine how judges are likely to rule on cases due to, say, the presence or absence of an Oxford comma.
There was an entire team dedicated to this work, and the hours were insane when the legislature was in session. She ended up not taking the job because of the downsides associated with moving to the capital, so I don't know more about the job. I'd be curious how much AI has changed what that team does now. Certainly, they still would want to meticulously look at every character, but it is certainly possible that AI has gotten better at analyzing the "average" ruling, which might make the job a little easier. What I know about law though, is that it's often defined by the non average ruling, that there's sort of a fractal nature to it, and it's the unusual cases that often forever shape future interpretations of a given law. Unusual scenarios are something that LLMs generally struggle with, and add to that the need to creatively come up with scenarios that might further distort the bill, and I'd expect LLMs to be patently bad at creating laws. So while, I have no doubt that legislators (and lobbyists) are using AI to draft bills, I am positive that there is still a lot of work that goes into refining bills, and we're probably not seeing straight vibe drafting.
Here's a fairly recent example of a $5M lawsuit that hinged on the interpretation of an Oxford comma in a Maine law about overtime pay: https://www.fedbar.org/wp-content/uploads/2018/10/Commentary...
> have we seen evidence of lawmakers submitting AI-generated language in bills or amendments?
MPs are definitely using AI to write their speeches in parliament: https://www.telegraph.co.uk/business/2025/09/11/chatgpt-trig...
>> lawmakers submitting AI-generated language in bills or amendments?
Most people would be shocked to find the majority of bills are simply copycat bills or written by lobbyists.
https://goodparty.org/blog/article/who-actually-writes-congr...
Bank lobbyists, for example, authored 70 of the 85 lines in a Congressional bill that was designed to lessen banking regulations – essentially making their industry wealthier and more powerful. Our elected officials are quite literally, with no exaggeration, letting massive special interests write in the actual language of these bills in order to further enrich and empower themselves… because they are too lazy or disinterested in the actual work of lawmaking themselves.
a two-year investigation by USA Today, The Arizona Republic, and The Center for Public Integrity found widespread use of "copycat bills" at both federal and state levels. Copycat legislation is the phenomenon in which lawmakers introduce bills that contain identical language and phrases to "model bills" that are drafted by corporations and special interests for lobbying purposes. In other words, these lawmakers essentially copy-pasted the exact words that lobbyists sent them.
From 2011 to 2019, this investigation found over 10,000 copycat bills that lifted entire passages directly from well-funded lobbyists and corporations. 2,100 of these copycat bills were signed into law all across the country. And more often than not, these copycat bills contain provisions specifically designed to enrich or protect the corporations that wrote the initial drafts
I mean, we've seen laws that were written by lobbyists with zero changes. Does it matter if it was AI generated or not at that point? The congress critters are not rewriting what they've been told to do if they've even read it after being told what to do.
This is why there are certain jobs AI can never take: we are wired for humans to be responsible. Even though a pilot can do a lot of his work via autopilot, we need a human to be accountable. For the pilot, that means sitting in the plane. But there are plenty of other jobs, mostly high-earning experts, where we need to be able to place responsibility on a person. For those jobs, the upside is that the tool will still be available for the expert to use and capture the benefits from.
This lawyer fabricating his filings is going to be among the first in a bunch of related stories: devs who check in code they don't understand, doctors diagnosing people without looking, scientists skipping their experiments, and more.
> This is why there are certain jobs AI can never take
You're thinking too linearly imo. Your examples are where AI will "take", just perhaps not entirely replace.
I.e., if liability is the only thing stopping them from being replaced, what's stopping them from simply assuming more liability? Why can't one lawyer assume the liability of ten lawyers?
Then there will still be lawyers. More productive, higher income lawyers.
Just like with a lot of other jobs that got more productive.
People who think like this cannot be convinced; they're unaware of the acceleration of the rate of progress, and it won't change until they clash with reality. Don't waste your time and energy trying to convince them.
They don't understand how to calibrate their model of the world with the shape of future changes.
The gap between people who've been paying attention and those who haven't is going to increase, and the difficulty in explaining what's coming is going to keep rising, because humans don't do well with nonlinearities.
The robots are here. The AI is here. The future is now, it's just not evenly distributed, and by the time you've finished arguing or explaining to someone what's coming, it'll have already passed, and something even weirder will be hurtling towards us even faster than whatever they just integrated.
Sometime in the near future, there won't be much for people to do but stand by in befuddled amazement and hope the people who set this all in motion knew what they were doing (because if we're not doing that, we're all toast anyway.)
The book https://en.wikipedia.org/wiki/The_Unaccountability_Machine introduces the term "accountability sink", which is very useful for these discussions. Increasingly complicated systems generate these voids, where ultimately no human can be singled out or held responsible.
AI offers an incredible caveat emptor tradeoff: you can get a lot more done more quickly, so long as you don't care about the quality of the work, and cannot hold anyone responsible for that quality.
Where could lawyers be learning this behavior?
https://www.theguardian.com/us-news/2025/apr/24/california-b...
There you have it --- proof that lawyers love AI --- in more ways than this one example illustrates.
Using a tool that is widely known to be flawed to provide any sort of professional service (legal, medical, accounting, engineering, banking, etc.) is pretty much a textbook definition of negligence.
And lawyers just love negligence.
$10,000? That's a slap on the wrist. I don't say this lightly, this should have been jail time for someone. You're making a mockery of our most sacred institutions.
For something like this, you'd suggest jail time for a first offense? I hope you find excuses to skip your next jury summons. In jury selection, the defense typically asks whether you could agree to a minimum sentence, while prosecutors ask whether you could agree to the maximum. I was personally asked whether I could agree to 99 years for someone's first offense of GTA. I said no and was dismissed. Sounds like you'd have said yes.
For someone who had to attend 6+ years of school and had to pass a professional licensing exam with ethics questions? Yes, I do. $10,000 is one week of billable hours at $250/hr.
Do you think a Civil Engineer (PE) should be held liable if they vibe engineered a bridge using an LLM without reviewing the output? For this hypothetical, let’s assume an inspector caught the issue before the bridge was in use, but it would’ve collapsed had the inspector not noticed.
No single civil engineer designs a bridge though now do they? So the premise of your retort is just way off here. No bridge plan is made without reviews after one person presses print on the plotter. Even the construction company hired to build the bridge will review the plans before they break ground. If someone is building a bridge on their private property and hires their nephew, that's on them. An actual civil project, nope, I reject your premise outright.
I would wager that plenty of bridges are designed by a single engineer, most bridges are not massive 8-lane highway bridges but small bridges in municipalities.
A single person can design a building, why not a bridge?
P.S. I sell and run commercial construction work
So you’re saying when you run those jobs or the team you sell the job to have no one that’s ever built anything before to see plans and ask questions? So if I accept the premise that a single person designed a bridge that that’s all that would be done to ensure it meets the specifications? You’re saying that nobody would ever review the plans? Nobody would say the bolts being used are too small for purpose, or any number of things that could pop up? The concrete pads are insufficient? The steel I-beams are too thin? Someone would just take the plans exactly as listed, purchase the material as listed, and not one question ever would be raised? I would never trust a construction team that didn’t raise questions if not even to see if they themselves could skimp on material to pocket the difference.
You raise a number of good points, my example wasn’t as strong as I thought. I was attempting to contrive a scenario that was similar to a lawyer using an LLM and not reviewing the output, the civil engineering example isn’t a great due to the issues you raised.
> Someone would just take the plans exactly as listed, purchase the material as listed, and not one question ever would be raised? I would never trust a construction team that didn’t raise questions if not even to see if they themselves could skimp on material to pocket the difference.
You’re right, the contractor would likely catch the design issues if there were any, and possibly before that in the plan review/permitting process if the AHJ is on the ball.
I work in the electrical trade and I (and my electricians) find and correct errors frequently in engineered plans. We tell the engineer if it costs us more money to attempt to get a contract change order, but we keep it to ourselves if we can do it safely for cheaper. A common scenario I run into is a design with oversized feeders where you can use a smaller wire and still meet code, we just pull the smaller conductors and pocket the difference (assuming you bid the project using the larger wire size)
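The wire-sizing step described above boils down to a lookup: pick the smallest conductor whose ampacity still covers the load. Here is a toy sketch of that idea; the function name is mine, and the ampacities are placeholders loosely based on the 75°C copper column of NEC Table 310.16, ignoring derating, voltage drop, and termination ratings, so treat it as an illustration rather than an engineering tool.

```python
# Illustrative 75°C copper ampacities (placeholder values; verify against
# the actual NEC Table 310.16 before using for anything real).
# Dict is ordered smallest conductor first.
AMPACITY = {
    "12": 25, "10": 35, "8": 50, "6": 65, "4": 85,
    "3": 100, "2": 115, "1": 130, "1/0": 150, "2/0": 175, "3/0": 200,
}

def smallest_code_compliant_wire(load_amps: float) -> str:
    """Return the smallest listed conductor whose ampacity covers the load."""
    for size, amps in AMPACITY.items():
        if amps >= load_amps:
            return size
    raise ValueError("load exceeds largest listed conductor")
```

If a design calls for 2/0 where the calculated load only needs #2, that gap is exactly the margin the commenter describes pocketing.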
>jail time
Surely it would suffice to eject him from the California bar -- or suspend him from it for a time.
Yeah, I would revoke his license and ban him for a few years.
After that allow him to retake all the exams to get licensed again if he wants to.
Jail time? That's a slap on the wrist. Let's summarily execute him and his extended family two generations up and down.
What we need is a return to the good old days of the Nine Familial Exterminations
Your response seems a bit over the top, especially considering it is a civil case.
Describing the legal system as sacred is surprising to me, it's not what I would think about it.
What do you understand sacred to be, and why would you include the legal system in that category?
Sacred: regarded with great respect and reverence by a particular religion, group, or individual.
There's something grimly hilarious about knee-jerk demands for jail time for [other profession] for using AI, when a bunch of us here are eagerly adopting it into our own workflows as fast as we can.
Why jail time for lawyers who use Chat-GPT, but not programmers? Are we that unimportant compared to the actual useful members of society, whose work actually has to be held to standards?
I don't think you meant it this way, but it feels like a frank admission that what we do has no value, and so compared to other people who have to be correct, it's fine for us to slather on the slop.
The jail time wouldn't be for using AI. It would be for submitting a document to the court that would have gotten an F in any law school.
Sort of like recklessly vibe coding and pushing to prod. The cardinal rule with AI is that we should all be free to use it, but we're still equally responsible for the output we produce, regardless of the tooling we use to get there. I think that applies equally across professions.
> Why jail time for lawyers who use Chat-GPT, but not programmers? Are we that unimportant compared to the actual useful members of society, whose work actually has to be held to standards?
Programmers generally don't need a degree or license to work. Anyone can become a programmer after a few weeks of work. There are no exams to pass unlike doctors or lawyers.
All the more reason to have insanely harsh punishments!
In the absence of mitigations like licensing laws and exams, it becomes even more important to use criminal and civil law to punish bad programmers.
What about if the programmer job was offshored?
relevance?
Will the same criminal and civil laws be used if the software was developed by programmers in another country?
> when a bunch of us here are eagerly adopting it into our own workflows as fast as we can.
speak for yourself. some of us are ready to retire and/or looking for parts of the field where code generation is verboten, for various reasons.
LLMs are simply not the right technology for search and citing references, no matter how "agentic" you make them.
They are great for plenty of other things, like generating code.
We are so focused on shoehorning LLMs into a task they are fundamentally unsuited for that we are probably missing the discovery of the technology that would actually solve it.
Everyone is somewhat missing the point here that the California bar is making.
They don't care if you use an AI or a Llama spitting at a board of letters to assemble your case, you are responsible for what you submit to the court.
This is just a bad lawyer who probably didn't check their work in many other cases, and AI just enabled that bad behavior further.
The American Bar Association, as well as every state bar association, published guidance on GenAI usage at least a year ago. The existing legal and ethical responsibilities go beyond just being responsible for hallucinations. Client information privacy, accurately tracking time and billing only for time actually spent on the case, etc.
As time goes on, it becomes less and less defensible to see this stuff.
This. Lawyers can use AI tools. But an attorney is ultimately responsible for everything that goes out the door. So they had better check the output from their AI tool carefully. In most firms (at least most good ones) someone would check every citation that goes out the door, even if it was written by an experienced attorney, so it speaks volumes that someone would fail to check citations generated by an LLM.
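The "check every citation" step is also partly automatable: before any human verification, a script can at least enumerate every reporter-style citation in a brief so each one can be looked up and confirmed to exist. A minimal sketch follows; the regex and function name are my own, the pattern deliberately over-matches, and real Bluebook citation formats are far more varied than it covers.

```python
import re

# Rough, deliberately loose pattern for US reporter citations like
# "123 Cal.App.4th 456" or "17 F.3d 1130": volume, reporter token(s)
# starting with a capital letter, then a page number. It only flags
# candidates for human review; it does not validate anything.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:[A-Z][\w.]*\s+)*[A-Z][\w.]*\s+\d{1,4}\b")

def extract_citations(brief_text: str) -> list[str]:
    """Return candidate citation strings in order of first appearance,
    deduplicated, so each can be verified by hand against a real reporter."""
    seen: list[str] = []
    for match in CITATION_RE.finditer(brief_text):
        cite = " ".join(match.group(0).split())  # normalize whitespace
        if cite not in seen:
            seen.append(cite)
    return seen
```

A checklist produced this way would have made "21 of 23 citations are fake" very hard to miss: each extracted string either resolves to a real case in a legal database or it doesn't.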
$10k is a slap on the wrist. He should be disbarred.
> He thinks it is unrealistic to expect lawyers to stop using AI. It’s become an important tool just as online databases largely replaced law libraries and, until AI systems stop hallucinating fake information, he suggests lawyers who use AI to proceed with caution.
I think this is a good reason for fines to not be incredibly big. People are using AI all the time. There will be a growing period until they learn of its limitations.
The slope of the learning curve can be adjusted, though, and the level of fines is a big part of that adjustment. I'd just suggest escalating per incident for each firm, rather than treating it as a first offense for each individual lawyer at the firm.
I'm numb to it after many "EU fines Householdnamecorp a zillion doubloons" type headlines, but using "historic fine" to describe $10k to a lawyer feels odd.
> The fine appears to be the largest issued over AI fabrications by a California court
This is a bit like all the stats like "this is appears to be an unprecedented majority in the last 10 years in a Vermont county starting with G for elections held on the 4th when no candidate is from Oklahoma".
Lots of things are historic but that doesn't necessarily mean they're impressive overall. More interesting is how many of these cases have already been tried such that this isn't "historic" for being the first one decided.
Like a lot of sports statistics. "Most home runs hit by a player older than 35 on Tuesdays in years with an even number"
Expecting the same level of fine for an individual person as for a faceless corp really shows how numb you must be. For an attorney, a fine that size is not normal. TFA even gives examples of higher fines issued to law firms; while still not as high as your zillion-doubloon hyperbole, it still shows the distinction between an individual and a s/corporation/law firm/. EU fines have been getting progressively higher, especially for repeat offenders. It would be unwise to expect different in legal matters.
His website makes him look like the owner of a law firm, although I think it's just him? I'm not expecting the same number, but... California issues bigger fines for watering lawns or buying illegal fireworks. For a lawyer, a fine an order of magnitude smaller than "hiring a paralegal" is less "historic" and more "cost of doing business, don't get caught."
California issues higher fines for littering than for abandoning an animal on the highway. It's listed right there on the highway signs as you enter the state. It stood out when I saw it for the first time.
Fines are arbitrary numbers set by some people not necessarily knowing about other fines for other offenses.
I don't think it's historic because of the amount of the fine; it's historic because of the precedent it sets about the use of AI in legal documents.
yeah, is $10,000 a lot of money to a lawyer?
Probably a lot for a lawyer that can't afford a paralegal.
The legal system is not an API you can spam. To even submit to it, you must have credentials that involve going to school for 3 years and passing a very tough exam. If you make a submission that is not fully syntax- and reference-checked, you can face a $10,000 fine and suspension of your license to submit to it. You do not waste the legal system's time with AI slop.
I have worked in litigation and if you submit some litigation to a court clerk that is not perfect you won't even get a reply that it's no good. They'll just throw your brief in the trash without looking at it and never call you back.
$10k is not a high enough fine to dissuade this behavior. Large law firms will just approach this the same way that big tech accepts GDPR fines as part of the cost of doing business.
Does anyone have a source of what the citations were?
I didn't see them mentioned in the article.
The article links to the opinion[1], which notes more than once that "the quoted language does not appear anywhere in the opinion," and "Goldstine appears to be a fabricated case." I don't know whether it's easy to get a copy of the complaint in question.
[1] https://www4.courts.ca.gov/opinions/documents/B331918.PDF
$10,000, wow what a shocker! I don't know how anyone who can afford to live in California could ever expect to pay such a fine. I expect the lawyer will soon have to declare bankruptcy.
It's the first of its kind and, I would wager, more of a warning shot. $10k on one case may not be much, but tens of thousands multiplied across hundreds of cases won't be negligible for a lawyer or a small firm. Not to mention the prospect of losing cases, and the reputational damage costing future business.
Most lawyers don't make great income. This could be substantial for him if he has to pay it personally.
I searched for the lawyer's name in the state bar association, they've been practicing for over 13 years. Even has electrical engineering in their background.
And?
$10,000 is nothing. Should be $200,000+.
It's fun to point and laugh in this scenario where the attorney just threw slop at the court. However, these stories won't be around long.
Tools like Lexis+ from LexisNexis[1] now offer grounding, which prevents these hallucinations, so busting people for cutting corners won't be as simple in the future.
We'll be closer to the real Butlerian Jihad when we see pro se plaintiffs/defendants winning cases regularly.
[1] https://www.lexisnexis.com/blogs/en-au/insights/hallucinatio...
Doesn't even feel controversial. LLMs hallucinate and in law that's not acceptable. Increase the fine though to really punish those doing this.
Hopefully the fines grow exponentially for repeat offenders. Seems like now, lawyers can use AI to DDOS the other counsel.
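As a toy sketch of what an exponentially escalating schedule could look like (the base amount, multiplier, and function name here are all invented, not anything a court has proposed):

```python
def repeat_offense_fine(base: float, offense_number: int, factor: float = 10.0) -> float:
    """Toy escalation schedule: each repeat offense multiplies the
    previous fine by `factor`. All numbers are hypothetical."""
    if offense_number < 1:
        raise ValueError("offense_number starts at 1")
    return base * factor ** (offense_number - 1)
```

Under this made-up schedule, a $10k first offense becomes $100k the second time and $1M the third, which changes the "cost of doing business" calculus quickly.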
The fact that they interviewed someone named "Charlotin" for this article is a coincidence I can't ignore
This is just incredibly defeatist on everyone talking here.
Here we have irrefutable evidence of just how bad the results of using AI can be, and yet... the response is that we just need to accept that there is going to be damage and keep using it?!?
This isn't a tech company that "needs" to keep pushing AI because investors seem to think it is the only path of the future. There is absolutely zero reason to keep trying to shoehorn this tech in places it clearly doesn't belong.
We don't need to accept anything here. Just don't use it... why is that such a hard concept.
> Here we have irrefutable evidence of just how bad the result of using AI would be
It's not just the result of using AI, it's the result of failing to vet the information he was providing the court. The same thing could've happened if he hired a paralegal from Fiverr to write his pleadings and didn't check their work.
It's like saying that because he typed it on a computer, it's the computers that are the problem, and we shouldn't keep using them.
We're already at least a year past AI tools having the ability to perform grounding (Lexis+ from LexisNexis, as I cited on another comment in this post, for example), so this whole fiasco is already something from a bygone era.
It's not (just) defeatist, it's fatalistic. And I agree. There can be cultural underpinnings here, too--the attorney is CA-based, and though LA is distinct from SF, I wonder if there isn't a thread of the "move fast and break things" ethos showing up as well.
Whenever an "AI" article is posted here the comments are heavily astroturfed.
I'm curious what part makes you think "everyone" is endorsing continued use by lawyers to write briefings.
For so many people, option A is chatgpt lawyer and option B is no lawyer. When the hourly billing of a lawyer approaches the weekly pay of a worker, something’s gotta give
Have LLMs resulted in a democratization of law where anyone can now afford to hire a lawyer? As far as I know, the answer is no. Lawyers who use unreliable tools to generate fake citations are still charging just as much.
I'm a lawyer and I do not use AI. I was given a product test of an AI legal solution and it was terrible.
The fact that the lawyer is unrepentant and says that it is simply too convenient and "there will be some victims, tough shit", basically, means the fine was a zero or two too short. Deterrence needs to hurt!
AI constantly fabricates references. I found a question recently (accidentally) which would make it fabricate every answer. Otherwise, it tends to only fabricate about a third of them, and somehow miss dozens of others that it should easily find.
After asking for recommendations, I always immediately ask it if any are hallucinations. It then tells me a bunch of them are, then goes "Would you like more information about how LLMs "hallucinate," and for us to come up with an architecture that could reduce or eliminate the problem?" No, fake dude, I just want real books instead of imaginary ones, not to hear about your problems.
detail: The question was to find a book that examined how the work of a short order cook is done in detail, or any book with a section covering this. I started the question by mentioning that I already had Fast Foods and Short Order Cooking (1984) by Popper et al. and that was the best I had found so far.
It gave me about a half dozen great hallucinations. You can try the question and see how it works for you. They're so dumb. Our economy is screwed.
free bro