What happened:
In the 2023 case Mata v. Avianca, two lawyers used AI to help draft a legal brief.
It gave them case citations.
They looked real.
They sounded real.
So they submitted them to the court.
The problem:
None of the cases existed.
The court later sanctioned the attorneys for submitting AI-generated citations they had not verified.
What I see:
This wasn’t just an AI error.
It was a failure of judgment.
They didn’t just use AI…
They trusted it without verifying it.
They let the output replace their responsibility.
Why it matters:
This is how dependency forms.
Not from bad tools, but from outputs that appear credible and go unverified.
When AI is used in place of judgment:
verification drops
risk increases
Where do you still double-check AI, and where have you stopped?