LLMs in Legal Education: A Conversation with Maya
Maya is a former student of mine, now in her third year of law school. This conversation took place over Zoom while she was driving through Virginia, heading home.
On AI and Evidence in Future Legal Practice
Maya: The real battle is gonna come in the future when you have to authenticate whatever evidence it is you’re admitting into court. With photographs and videos, the question’s gonna become: is someone’s testimony that this is the real thing enough to authenticate it? Because you’re not going to be able to tell the difference between something that was actually recorded by someone and a video that was generated by an AI system.
The law, in a way, has to change. It’s gonna become a battle that they don’t even foresee, or they don’t want to foresee. LLMs and AI models are going to take over, for sure, even in the legal field. There’s already lawyers fucking up and citing cases that don’t exist.
The general rule is that to authenticate a photo or a video, the person just needs to testify. They don’t have to have taken the photo or the video; they just have to testify either that they were there and witnessed the events in person, or that they were there and saw the video. That’s enough to authenticate it and get it into a courtroom. I don’t even think they’re thinking about, okay, but what if it was AI-generated? No one’s asking that question.
Me: If you were a defense lawyer, would you bring that up? Would you say, how can you prove this isn’t AI-generated?
Maya: I don’t even know if there’s case law on this. I could object to it, and I could question it, and I could bring authenticity into question, but what are they gonna do? How do they prove? How do you prove? Should it be the state’s burden, or should it be the defendant’s? Should it be the person trying to admit it? I don’t know.
On Career Aspirations and the Criminal Justice System
Maya: I always thought I wanted to do criminal defense. Part of me still wants to do criminal defense. But I really want to work with children. I think I want to work in the child welfare system.
I did the criminal defense clinic. Yes, I do think defendants deserve a right to representation. Yes, I probably will become a criminal defense attorney at some point in my life. I kind of want to prosecute. I think I would be good at prosecuting, especially with the defense training that I’ve had. I think I’d be a fair prosecutor, but holy fuck do I want to prosecute some people and throw them away.
When I was working for the defense attorney, for a lot of the cases that we knew were going to trial, I would have to find the case law and make the arguments. The arguments that I came up with weren’t made because I believed in them; they were made because I knew there were loopholes. I could find a technicality, I could find case law that was good for us, I could make the circumstances look good for us. I knew otherwise. That’s really hard for me. I believe they deserve representation. I can find the case law that you need, but I don’t think I want to.
On First Encounters with ChatGPT
Me: Do you have a memory of the first time that you used ChatGPT?
Maya: I don’t remember the very first time, but I can tell you it was during law school, and it was for something stupid. My roommate at the time was heavily into ChatGPT before I was, so she was always saying, “Well, let’s just ask ChatGPT.”
Ever since I got that first response, I was like, okay, you know what? This is a lot quicker than Googling.
On Academic Integration of AI
Maya: It’s so interesting. It was kind of unspoken at first. Which is not the way things go in law school—things are usually spoken about. But one day, it just kind of got integrated, literally into our syllabus.
We were trying to figure out if it was against the honor code. We pulled up the syllabus, and it said “AI tools like ChatGPT are not permitted for use on this assignment.” So we looked at each other, and we were like, “Okay, but it says you can talk to others about the assignment, so what if we talk to ChatGPT? We just don’t get it to write the assignment.” That’s a loophole.
Then there was another professor who just randomly said one day, “If you want to use ChatGPT, I don’t care, go ahead. I will be able to tell who knows their work and who doesn’t. It doesn’t matter if you use ChatGPT or not.” And I was like, he’s right!
At the end of the semester, this professor sent out a survey to our entire year asking about ChatGPT usage—how many assignments, how many hours a week. It wasn’t a thing, and then all of a sudden, it was a thing. But no one really said anything about it. I feel like they almost didn’t want to break it down to the people who weren’t using it.
On Writing and AI Collaboration
Me: You said you wouldn’t use it to write. Why not?
Maya: Not to sound cocky, but I think I can do better. In the sense of, I know who my audience is, and I know exactly what they’re looking for. Sometimes it has gaps in its responses.
When I do use it for writing, I give it extremely specific, detailed instructions, worded specifically, with an outcome that’s specific. If it’s missing things, I will go back and tell it to use open sources to fill in what it’s missing. If there are things that are wrong that I know are wrong, I will tell it that everything else is great, but these are wrong, and it needs to go fact-check.
Then it gives me a complete response, and then I will go edit it, because I know what my audience is looking for, and I know the changes that need to be made. I can make them faster, and with a better understanding, than it would. I never just use the writing that it gives me.
On Research and Fact-Checking
Maya: In law, cases are kept on databases—LexisNexis and Westlaw. I use Lexis for case searches. I’m very good at researching. But how I’ve become even more efficient is Lexis now has an AI tool. You can speak to its AI, tell it what you’re looking for, change the broadness of your scope, look for a specific jurisdiction. It’ll pull the cases for you and give you summaries.
So I use AI to check my AI. I’ll get my stuff from Lexis AI, look at the cases, download them, send them to ChatGPT, and be like, “This is what Lexis AI said to me, here are the cases it’s referencing. Analyze everything, give me the holding, give me the rule. Fact-check Lexis AI, because sometimes it gets it wrong.”
The difference between me and these lawyers who are submitting things in court without fact-checking cases is that I always go fact-check the cases. I always go make sure that the holding is what it’s telling me it is.
On AI Hallucinations and Trust
Maya: There was one time where I asked it to find a case for a forensic evidence assignment. I was tired of doing the Daubert standard—I’ve written on it so many times. So I told it what outcome I was looking for, what type of evidence I wanted to analyze. I wanted to do polygraph testing.
It gives me this fantastic case! I’m like, great! I googled it and saw the case name, so I was like, great, it exists. It didn’t exist. I had misread the case name. Part of the case name was the same; the rest of it wasn’t. Everything that it said in the case was made up.
I only figured this out after I’d completed the assignment. I was like, let me just skim the case before I submit this, because I can’t be one of those people that submits a fake case.
I went back to ChatGPT, and I was really pissed. I was like, “This doesn’t exist.” It essentially said the model is supposed to be creative, so it took a concept that does exist and made it into something that didn’t exist. I was like, “Can you save in your memory that I never want you to ever make up anything that doesn’t exist?”
Ever since then, I always, always, always fact-check. Either myself or I make it fact-check itself and supply me the sources. I think if people use it in the right way, it can generate good products. It doesn’t have to be a thing that everyone’s afraid of or doesn’t trust.
On Creative Uses and Personal Applications
Maya: I use ChatGPT for creative writing. Sometimes there’s a particular thing that I want to talk about or write about, or that I really just want to see. I want it generated from my perspective, but I don’t necessarily want to generate it myself right now; I still want to receive the product. I want to see it, and I want to feel what I would have felt if I had generated it.
I have it generate things for me, like poems, or stories, or short excerpts from a certain perspective. I tell it to tweak things, and I tell it to continue from a different perspective, and then I add all of it together.
I use it in so many different ways. Sometimes when I have multiple choice assignments, I’ll upload it and ask what the answer is, and then if I think it’s wrong, I’ll challenge it. I’ll be like, “Well, this is why I think you’re wrong.” Then it’ll challenge me back. Once I banter with it, it usually comes out with the correct response.
On Confidence and Validation
Me: Do you feel more confident as a human being?
Maya: I do! I almost feel validated. I feel more confident in the arguments that I’m giving and the things that I’m saying and the way that I’m saying them, because I’ve already been through it. I’ve already bounced the ideas off of something. I’ve already gone through all the mind spins that you’d go through.
I’ve already been through all of the insecurities that I would have. They’ve already been proven otherwise, so now when I go out and I present those things, I feel like I’ve already fact-checked everything. I feel more complete. I feel like you can’t really poke a hole in me. I’ve already been through more avenues than you could probably think of.
On Teaching and Academic Integrity
Me: If you were an English teacher and had the opportunity to use ChatGPT and teach your students how to use it, what would you have them use it for?
Maya: I don’t think there’s a point in telling students you can’t use it, because that just makes them want to use it more. But secondly, I do think they should be able to use it.
You can very clearly tell when someone doesn’t know how to use it, didn’t spend the time, literally just uploaded the assignment and said, "Generate it for me." You can look at it and be like, that is AI-written.
I think there are ways that it can enhance students. You can talk to it about the assignment. You can ask it what it thinks. You can upload a part that you’ve written and ask it if it thinks you’re heading in the right direction, if it has any critiques, if you should expand the area that you’re looking at.
I think it can literally be a mind that helps you get through the assignment. In the end, arguably, the product is still yours, because you prompted it so much in such a specific direction that even if you did use it, and you didn’t edit its response, it’s unlikely that another student’s gonna have the same response as you.
I think that’s a good way for students to use it, because I think it’s going that direction anyway, so you might as well train them how to use it properly, so that they don’t go to court and submit documents with fake cases.
On the Uncanny Valley and Voice Features
Me: Have you used the voice feature?
Maya: No, it unnecessarily scares me. I don’t know why. I’m just creeped out by those AI voices that don’t sound human. It just gives me that phenomenon where things are so close to human-like, but they’re not human-like, and it’s freaky.
Me: Uncanny Valley.
Maya: That one.
On Academic Resistance to AI
Me: In my world, there are very few creative writers I know who are using this in any way, which is fucking mind-blowing to me, because we have a robot that we can talk to and ask it to do all kinds of tricks and perform all kinds of linguistic wonders.
Maya: Exactly! Someone’s not going to use ChatGPT because there are people in Nigeria being paid pennies an hour to flag offensive internet content. But there’s invisible labor everywhere. Capitalism depends on invisible labor and the oppression of people globally for literally every product that we use.
Anything you can raise about ChatGPT is already a problem elsewhere in the world, and nobody called it a problem when it wasn’t ChatGPT. Someone posted on Instagram saying, “If you care about the environment, you won’t use AI.” And I was like, you’re posting this on Instagram, whose data servers also require water to cool them! Even a Google search is the same thing.
On the Future of AI in Education
Maya: I literally use it for every little thing that I don’t understand. In a way, I’m like, is it hindering my thinking process? But it doesn’t feel like it. That thinking process is there. I get to the same wall I would have gotten to before. It really just helps lay out things in simple terms when you need it to.
It’s like literally having a second mind. You don’t have to rely on it, but it’s a second mind that you can bounce off of and get pointed in directions that you wouldn’t necessarily have gone.
There is no way to correctly identify if something’s AI-written. They just can’t do it. They don’t have the technology for it. So even if they suspect that your assignment’s AI-written, unless it’s the exact same as someone else’s, they can try to get you, but there’s no way for them to tell.
This conversation reveals the complex reality of AI integration in legal education: students are navigating uncharted territory and developing sophisticated strategies for collaborating with AI tools, while institutions struggle to establish clear policies. Maya’s experience suggests that rather than prohibition, we need frameworks for responsible and effective AI partnership.