In this episode, we interview Mark Williams, Assistant Professor of Law and co-founder of Vanderbilt Law School's AI Law Lab. Mark discusses what inspired the creation of the AI Law Lab, its goals and objectives, partnerships and projects, the role of AI in legal services and access to justice, future roles of legal professionals with AI integration, and more.
Key discussion points:
James Pittman: Welcome to Immigration Uncovered, the Docketwise video podcast, where we dive deep into the dynamic world of immigration law with the latest developments, cutting-edge practice management strategies, and the transformative impact of legal technology. I'm James Pittman, and today I have with me Mark Williams from Vanderbilt University Law School and the Vanderbilt AI Law Lab, VAILL. Mark, welcome. Thanks for joining us.
Mark Williams: Thank you very much for having me.
James Pittman: All right, Mark, well, let's just start off. Let me just ask you: what inspired the creation of the Vanderbilt AI Law Lab?
Mark Williams: Yeah, it was about this time last year that the seeds were really in place. For about ten years, I have been teaching a course called Law Practice Technology, or some version of it, and AI had always been a component of it, because AI has been a part of the way we operate online in society and in law, whether we knew it or not. But something was different about this time last year. ChatGPT had come out in November 2022, and the students were back. They'd clearly played with it in the time between when they left after exams and went home and when they came back to start the new semester. And as I mentioned, I've always had an AI component in my course, but something was different when I started the course this time last year. The students were different. Their questions were different. We all understood that there's something about that chat interface, when ChatGPT came out, that people reacted to differently. And especially in law, where our stock in trade is words, and we had been presented with essentially a calculator for words, we had to contend with this in some way, and our students were talking about it. Whether I liked it or not, in that moment I was teaching a more AI-centric course. We had to pivot. It couldn't just be a unit anymore. It had to be a through line through everything that we talked about, and that's how we proceeded. So that's where the light bulb went on for me. And then our dean, Chris Guthrie, had also been thinking about this around that time. He had seen, or had been told about, an early version of what we now know as Harvey or Casetext CoCounsel; somebody on a board he sits on mentioned that there's this tool built on a large language model that's doing this quality of work. And so he approached myself and Cat and other stakeholders here at the law school: what resources do we have, and what can we do to be proactive in thinking about this technology, so that we can emerge as a school that's out front in thinking about its implications for both students and legal work? So that's really where the seeds of it happened, over the spring of 2023 and in working groups through the summer, toward what we finally launched. Part of how this came together, too, is that if you look at the generative AI work being done university-wide at Vanderbilt, we are truly part of a larger consortium of people with more computer science backgrounds and expertise, and we have been able to leverage those partnerships as well to help us bring this lab to fruition. But the seeds were really planted there.
James Pittman: And you mentioned Cat Moon. That's Caitlin Moon. She's your co-founder at the lab.
James Pittman: I just want to let everyone know that I had read in one of the articles about you that the lab was somewhat inspired by the news that ChatGPT passed the bar exam. So let's talk specifically about that. What did that specific event mean in your consciousness?
Mark Williams: It certainly helped, yes. I mean, the fact that it passed the test in and of itself was not the trigger, as in, if it passes, then we will create the lab. But that leap. And I still find when I talk to attorneys, and this is part of the mission of the lab in these early days, that people don't understand the leap in power that GPT-4 represented over the previous iteration; it's still the best frontier model. If you haven't interacted with that model and you're a user of generative AI, you haven't experienced the full capabilities of what these tools can do. And so, yeah, that was a light bulb moment, because you can go see that paper, you can look it up online: GPT-4 passes the bar exam, and it goes from being a bottom-10% test taker with ChatGPT to a top-10% test taker in the course of a couple of months. So that's pretty profound. And you can see the answers for yourself in the appendix of that paper, because it's not just the multiple choice, it's the essay portion. There's no hiding the ball. You can evaluate for yourself what that is. And what is interesting to me, too, is how the goalposts kept getting moved, and the things that we've gotten used to over the last year. When ChatGPT came out, it could do certain things. It could answer a legal question or two, but if you prodded it for a little while, it kind of fell apart. So in that window from November to March, before GPT-4 came out, we said, well, it could never pass the bar exam, or it could never do this, or it could never do that. Well, then it does it. So then we moved the goalposts back further: well, it can't do this, well, it can't do that. It's amazing what we've gotten used to over the course of the last twelve to thirteen months. And certainly that moment represented something. Now, whether it's true thinking or reasoning, that's clearly not the case. But it did something, something that we hadn't seen before, and it was impressive. So that is something we had to contend with.
James Pittman: And it certainly sounds like you've got Dean Chris Guthrie on board. Now, as one of the co-founders of the lab, could you share the specific goals and objectives that the lab is currently aiming at in terms of harnessing AI for legal services and knowledge? And how do you see AI currently intersecting with the delivery of legal services, especially from the access-to-justice angle?
Mark Williams: Right. I think right now, in this current moment, we have a suite of courses that we offer; we're teaching one right now called AI and Legal Practice, and I mentioned my Law Practice Technology course earlier. A lot of what we're focused on in this immediate moment, as we're building out the longer-term goals of the lab, is just to help equip and create a generation of students who are well versed in the nuts and bolts of the technology itself: how it works, everything from basic prompting techniques to ethical considerations to how to avoid being the ChatGPT lawyer of the Avianca brief. Right? Even yesterday, we spent a lot of time talking with our students about what is really very basic education. You don't have to be a machine learning PhD to understand, at a very basic level, that these large language models are just trying to predict the next word. Of course a model is going to try to please you, and it may produce a fake case citation, because it's just making a statistical prediction of the next words that you would like to see based on the text that you give it. So it becomes much less surprising that it may output a fake case if it hasn't previously seen, in its training data, the type of case that you're asking it to produce. Coming up with those use cases and grounding our students in that reality of the technology is a big focus right now. I think long term, that will subside, and we'll move into the more ethical implications. What does it mean for the nature of legal work? If certain tasks that we used to do are now more automatable, what then do we do with our time as attorneys? Those are the longer-term focuses. The immediate goal is to create a student who understands everything from the very basic nuts and bolts of prompting all the way out to the policy and access-to-justice implications. And then on the access to justice piece, we do have active partnerships. Right now, Professor Moon has a separate course that's also part of the concentration we're developing, a course called Legal Problem Solving. That course will partner with Legal Aid of North Carolina to come up with access-to-justice solutions using generative AI. We also are partnering with computer science students here at Vanderbilt, so they will work as teams together to create solutions. The end solution doesn't necessarily have to involve generative AI, though it will be involved in the process; it could or it couldn't. The course uses a human-centered design process to iterate toward an end solution, so we don't necessarily know what that end product is going to look like. But it is fascinating, and that's always been an inspiration for the lab: when we talk to our clinical folks here at the law school, or to legal aid clinics external to the law school, even if it's not legal work itself, just the intake, the process, the business of law, the things that generative AI can help with were immediately evident to people who work in those fields. So a big part of the impetus for the lab is to be a place where those kinds of people can come together and do what a lab does: explore use cases, try things, potentially fail.
Some of the things we do may not work. But if one of them works and five of them don't, and our students learn something along the way, that will be a valuable process.
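To make that next-word point concrete, here is a toy Python sketch of the mechanic Mark describes. The vocabulary and probabilities are invented for illustration; a real model scores tens of thousands of tokens with a neural network, but the failure mode is the same: the model picks a statistically likely continuation and never checks whether the citation it is completing actually exists.

```python
# Toy sketch of next-word prediction with an invented vocabulary and
# invented probabilities. A real model works the same way at vastly
# larger scale: it samples a likely continuation, with no step that
# verifies a completed citation against a real case database.
import random

# Hypothetical next-token distribution after the prompt below.
next_token_probs = {
    "Smith": 0.25,            # a plausible-sounding case name
    "Johnson": 0.20,
    "United": 0.15,           # as in "United States v. ..."
    "the": 0.10,
    "[no case found]": 0.02,  # an honest refusal is often the least likely text
}

def sample_next_token(probs: dict) -> str:
    """Pick a next token in proportion to its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The leading case on this point is"
print(prompt, sample_next_token(next_token_probs), "v. ...")
# The output reads fluently and confidently whether or not the case is real.
```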
James Pittman: Just for context, I mean, Vanderbilt has a Program on Law and Innovation. So how does the AI lab fit together with the Program on Law and Innovation? And what's the emphasis of the program versus the lab?
Mark Williams: The Program on Law and Innovation has been around, I want to say, since 2015 or 2016, somewhere in there. And a lot of what currently constitutes the lab, at least as far as our launch phase, was already there. We are very much a scrappy startup; we're kind of building the rocket ship as we're flying it. But what we found is that we already had the pieces in place within the Program on Law and Innovation to constitute the lab, and we just had to be much more intentional about it, put them together, and package them in a way that would be appealing to our students. We had been looking for a vehicle like this for some time, and generative AI was obviously the catalyst, because we've already had policy-oriented classes on artificial intelligence here. Professor J.B. Ruhl has a course called Law 2050 which explores many of these social questions, and we've intentionally created AI-specific courses within that program as well. What we found is we had the people and a process in place, so it made sense that this is a natural extension of the work that that program has done.
James Pittman: Mark, you've mentioned that the lab envisions forging partnerships with different sorts of partners, like academia, industry, and the legal community, to bring projects to life involving AI. So what's the status of some of these projects?
Mark Williams: Legal Aid of North Carolina is the first one that we're really intentionally pursuing. A lot of the last two months since the lab was announced has honestly just been meetings and intake and devising plans, so now the work begins. Legal Aid of North Carolina, which I mentioned before, I would say is our first real project. We also have been talking with a few law firms here in town about potential collaborations that I'm not quite ready to fully talk about yet, but those meetings have been taking place. An alternative dispute resolution group has also reached out to us recently to explore some potential collaborations as well, and I'm hoping we'll have much more to say about that in the future. And then we also have a plan to meet soon with the general counsel's office here at Vanderbilt University for a similar sort of design iteration. So a lot of what we've done in the first couple of months is intake and meeting with stakeholders. I'm really excited for the next eight to twelve months, where we do get to start doing the work, because a lot of this is dependent on students being involved and helping, either through coursework or through internships or RA work or the fellowships that we have been rolling out. So I'm really excited, especially for the next four to five months, where we can implement.
James Pittman: Well, you mentioned a bit about the coursework, but how about the internships and the fellowships? I mean, can you give us more details about what those might look like? And have any of them launched?
Mark Williams: Our students just started this week, so we are advertising them right now. And we have several RAs who are working on some projects as well. But again, a lot of what we are doing, literally this week and next week, is matching up projects that we have in mind with the students we are bringing in. So it is very much an ongoing thing right now. I hope that this time next year I'll have many more concrete things to show you of what we've done. But unlike a business, we are beholden to the academic calendar. So I would say the most concrete thing that we have that is ongoing, that is an active partnership, is that Legal Aid of North Carolina project. But again, we're very much working on spinning up those other projects. We have no shortage of work, I promise you that.
James Pittman: Now, you mentioned that the lab is part of a university-wide consortium effort around AI. So for these internships and fellowships, will any of the participating students be drawn from outside the law school, or is that going to be specific to the law school?
Mark Williams: For this immediate version of the lab, it is specific to the law school. But my long-term vision for the lab and for some of the work we do is to provide pathways for students who have an interest in both computer science and law to pursue opportunities or coursework in both. The current iteration of what we have is going to be through Cat's Legal Problem Solving course, which does match up people with a computer science background with law students. Long term, I would like to create a much more intentional sandbox where those students cross back and forth much more naturally. We're building those bridges right now, and that's what's exciting about being at Vanderbilt: the interest, the technical know-how, the mindset of curiosity and creativity and innovation are in place, and we are in the process of making that happen. But the only reason we can make it happen is that we have willing stakeholders. We very much have an idea in mind, but nobody's quite done a lab exactly like this before, so we don't have a roadmap either. That makes it both a little scary and a lot of fun.
James Pittman: Yeah, it's definitely a lot of fun, and it's fascinating. And I fully understand that you launched very recently. In terms of the involvement of the faculty, aside from the coursework that you mentioned, do you see, as things develop, that the lab will be able to affect the coursework and the instruction of other professors in the law school as they gain familiarity with the technology? Aside from yourself and Cat, who are focused on the lab, has there been involvement of other faculty in the law school yet, or how do you see that developing?
Mark Williams: That is really left to each individual professor, the extent to which they use or don't use the technology for their own work or in their classroom. One of the other pieces that was really an aha moment, and that gave us momentum for the lab: our dean, Chris Guthrie, had us do a two-day workshop with our faculty in May of 2023, just on generative AI, what it was, what it could do, what it could not do, and where we thought it was going. And that did spark a level of curiosity in our faculty that set us on a path of embracing, or at least exploring, the technology's use cases, in a way that was maybe unique to us among, not every law school, but I would say most law schools, in terms of putting it in the hands of faculty and having them really think about ways they could enhance and amplify their work. And shortly after that workshop, the law school paid for individual faculty members to have their own ChatGPT Plus accounts so that they could spend the summer practicing and getting used to the technology and what it can do. Because one of the things that Cat and I are strong believers in, and I'm always telling our students this too, is simply: don't get analysis paralysis over exploring these tools. Don't worry about crafting the perfect prompt. It's much more important to get in there, start prompting, and iterate. Part of the magic of these tools is that you can iterate and follow up; you can refine. We find that really anybody who practices with these things for ten to fifteen hours will start to understand the use cases for their own workflow: what it can do, what it cannot do, what Ethan Mollick calls the jagged frontier of use cases, things that you think it should be good at that it's sometimes horrible at, and other use cases that you would never have imagined it could help you with. It creates whole new ways of working. So that is a lot of what we've been doing with our faculty, and with our students, frankly. But that workshop really did set a lot of things in motion.
James Pittman: Have there been any examples you want to highlight of aha moments or epiphanies for any of the faculty members? I mean, maybe people who have been law professors for a long time, and this tool just comes out. Have you seen any of them light up and say, oh, my goodness, I really see how this could be utilized in my field?
Mark Williams: Yeah. Well, I mean, informally. I don't want to put anybody on the spot, but I've informally had conversations several times where faculty mentioned that, even just in preparing for lectures they might give, they will workshop ways that they might explain things, uses that don't necessarily require a right or wrong answer. You're not asking it for a case, you're not asking it for a statute. They'll test out different arguments. It really helped sharpen their thinking. It was an amplifier of their expertise; it wasn't a replacement for it. They didn't spin up their entire class lecture using a large language model, but they could take what they were planning to say and refine it, sharpen it, consider alternative lines of questioning they may not have considered before. I've had multiple conversations where that was mentioned to me as a use case. And then I've had others who mentioned that they would use it for data analysis, just helping identify patterns. You have to be careful about the data you put in there, especially if it's a public-facing model, but they're using it in creative ways. And I think what people are most excited about when they talk to me is not trying to fit it into their preexisting workflows, but that it helped them do some new thing better, or gave them more time to focus on more creative and intellectually stimulating work, because they no longer have to focus on the drudgery of the things that used to bog them down. Those are the things that get me the most excited, and the use cases that I've informally heard about. But, yeah, I don't want to name names.
James Pittman: I did see some names mentioned, like Cara Suvall, Jennifer Safstrom, and J.B. Ruhl.
Mark Williams: So J.B. is the originator of the Program on Law and Innovation, so a lot of the work we do falls under his umbrella, and he is somebody who's been working at the intersection of law and technology for a long time, and thinking about the world of law and AI for a long time. Professor Safstrom and Professor Suvall are two people I have known for a while, but really connected with over those workshops I mentioned, because they are both working in a clinical capacity here at the school as well. So they immediately saw the potential ways it could amplify clinical work, even if just through intake. Not necessarily a replacement for the work that they were already doing, but again, being able to do more work better. Because one of the things that we often find, and there have been several studies on this now, is that regardless of whether you're a lower-tier knowledge worker or a higher-tier knowledge worker, even right now, even if the models never get better, we know these tools help everyone do their work faster and at a higher quality level. And if you're a lower-level performer, we know they can dramatically raise the quality of your output as well. If you're a higher performer, they may actually pull you down a little bit, which is kind of interesting. But regardless of your performance level, we know they help you work faster, and people report higher levels of satisfaction. So tapping into that, not as a replacement for the core work of the clinic and what the students are there for, but, if anything, helping them get other things out of the way so that they can focus on their work more, is what they immediately recognized. So we partner with them, even if it's just an educational component where I come talk to those clinics about use cases. And some of my most exciting interactions with students have been after I have gone into those clinics, when we would come back and work on a problem they might be working on. So in Professor Safstrom's clinic in particular, we had students who were preparing for a deposition. Now, they had already done all of the deposition work. They were already experts on the case. They had already done their legal research. But afterwards, they came to me, and we used GPT-4 to do what we call the persona prompt. They asked the model to take on the persona of the person they were eventually going to question. You say, you are this person, and whenever we ask you questions, answer as though you were this person. And it helped them come up with new ideas, new issues they may not have previously considered, or new ways to ask questions. It just opened up a whole new line of creative thought. But the only way they were able to get to that point, to have that interaction, is that they were already experts. They already knew the case and had done the work, and they were using this to give themselves, essentially, superpowers, to take their line of questioning to the next level. And that's exactly the kind of work that I hope to explore the most with our students. When I talk to attorneys, when I think about the future of legal work in general, it's that amplification, not necessarily fitting it into our preexisting workflows.
Mark Williams: That's the most exciting part.
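For listeners who want to try the persona prompt Mark describes, here is a minimal sketch using the OpenAI Python client (version 1.0 or later). The model name, persona, and facts are illustrative placeholders, not details from the clinic's actual deposition prep, and, as Mark cautions elsewhere in this episode, confidential case facts should not be pasted into a public-facing model.

```python
# A sketch of the persona prompt: instruct the model to answer in character
# as the person to be questioned, then probe it with draft deposition
# questions. Persona, facts, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

persona = (
    "You are Jane Doe, an operations manager being deposed in a "
    "workplace-injury case. Answer every question in the first person, "
    "in character, drawing only on these facts: "
    "[facts the questioning attorney has already mastered go here]."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": "Walk me through what you saw on the morning of the incident.",
        },
    ],
)
print(response.choices[0].message.content)
```

The value, as Mark notes, comes after the human work is done: an attorney who already knows the case can judge which of the model's in-character answers suggest new lines of questioning and which are noise.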
James Pittman: Ask you to get a little bit visionary, if you will. And in your more sort of expansive moments, your more prophetic moments, how do you envision the future roles of legal professionals evolving as the AI integration progresses? I mean, what general trends would you expect to know?
Mark Williams: Cat and I often talk about how we don't have a lot of answers. If we did have the answers, we wouldn't need a lab, right? So a lot of our work, especially over the next two to three years, is to help us understand and define what the questions are in order to answer that exact question. I'm almost out of the prediction game when it comes to AI, because if you had told me at this time last year that we would have the capabilities we have now, it would have seemed so futuristic to me, I wouldn't have even understood what you were talking about. So just keeping up with the rapid pace of change, and equipping our students with the mindset to embrace that change and not run from it, that's the number one thing. Because I think, at the very least, the thing we have to contend with is that a lot of what these tools are really good at right now are those rote tasks that we often train attorneys on early in their careers. So if those tasks, the kind of rote tasks that we normally train on to get up to speed toward legal proficiency, go faster or become completely automatable, what then do we do with our time? I don't think it's a matter of lawyers going away. I just think the nature of the work that lawyers do going forward changes; maybe we're just doing a higher level of work now that some of the drudge work is out of the way. I'm not the first one to come up with this analogy, somebody else has mentioned it, but I think about the introduction of the calculator into grade schools and the consternation around it. I was a child of the 80s, so I was a product of this. What are the entry points for using a large language model, just like the entry points for using a calculator? When is it appropriate to bring the calculator in to help you do a certain level of calculation, and when is it appropriate to set the calculator aside, because you really need to struggle with the problem in a manual way to gain true mastery? And I think a lot of what we have to figure out in the legal profession is, even if it's just writing alone, which writing tasks are worthy of our time. Pablo Arredondo from Casetext often talks about this, that writing is thinking, right? And there's still something very valuable about that. When is the struggle of the blank page worth our time, because we're looking to achieve a certain level of mastery of a topic, and the struggle is beneficial in that way, even if it sucks, even if we hate it? Versus, when is the blank page a complete waste of time, and which writing tasks should just be rote automation? And sometimes those are the same things. You struggle with something for a while, to the point that you can take it for granted and not have to struggle anymore, and suddenly a task that you needed to learn becomes something you can hand to an AI model, because you have a level of mastery that makes you comfortable evaluating its output. Those are things I don't have answers for, but that's the framework within which I'm working, if that makes sense.
James Pittman: Mark, you piqued my curiosity with your anecdote about the persona prompt. So for our listeners, are there places that you can recommend for lawyers who want to try some of these things out, where they can learn about the various prompts or the best strategies for prompting?
Mark Williams: One of the projects I'm specifically working on under the auspices of the lab is developing a prompting course for attorneys. There are other resources out there as well, but I am developing one that will be on the learning platform Coursera; I literally just yesterday signed my agreement with Coursera to develop this course. But you don't have to wait around for my course, even if the alternatives are not law specific. I mentioned the partnerships we have here at Vanderbilt and the people doing great work here. One of the people we have partnered with early and often is Jules White, a computer science professor here at Vanderbilt. He was one of the first folks, over the summer, to launch a prompt engineering course, and you can find that on Coursera as well. He's got a whole suite of courses now. I highly recommend that anybody doing any kind of knowledge work, whether it's legal or not, go check those out. I certainly learned a lot, and it informs a lot of what I'm about to do with the legal-specific course. Learning different patterns, learning how a large language model accesses its training data, and how you can tip it to go deeper into its thought process and improve outputs: those courses are really helpful for that kind of strategic thinking. One of the examples he'll often use is a version of a prompt called few-shot prompting, where you essentially give the model a couple of examples of the outputs you would like to receive and then say, from now on, anytime I give you a question, give me the output in the style of the examples I've just given you. And as he points out, what you've essentially done right there is written a computer program. Think about what a computer program is: it's a set of instructions that you want the computer to repeat over and over. But you didn't use Python, you didn't use C, you didn't use JavaScript; you used English. And that's a lot of why the future of this is interesting to me. Again, I'm rehashing quotes from people much smarter than me, but the new programming language, in a lot of ways, is English, because of interactions like that. So that's one avenue I suggest people explore. There are others as well, but I'm biased toward my Vanderbilt colleagues. And then also Ethan Mollick, who is a Wharton professor. He has a Substack called One Useful Thing, where he does a lot of great writing, and he has produced a few studies as well about the impact and use cases of AI in knowledge work in general. So it's not just legal, but you can pretty quickly extrapolate from the types of things he talks about to your own use cases as a legal professional. There are people coming out all the time with great resources, but those are two that I lean on a lot in my personal day-to-day work.
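Here is a minimal example of the few-shot pattern described above: a couple of worked examples, then an instruction to keep following the same format. The clauses and summaries are invented placeholders; the prompt string is the point, and it can be sent as the user message to any capable chat model.

```python
# Few-shot prompting: show the model example input/output pairs, then ask
# it to apply the same pattern to new inputs. In effect, the examples plus
# the instruction are a small program written in English.
few_shot_prompt = """\
Summarize each contract clause in one plain-English sentence.

Clause: "Lessee shall indemnify and hold harmless Lessor from any claims
arising out of Lessee's use of the premises."
Summary: The tenant agrees to cover the landlord's losses from claims caused
by the tenant's use of the property.

Clause: "This Agreement shall be governed by the laws of the State of
Tennessee."
Summary: Tennessee law applies to any dispute under this contract.

From now on, whenever I give you a Clause, reply with a Summary in exactly
this style.

Clause: "Either party may terminate upon thirty (30) days' written notice."
Summary:"""

print(few_shot_prompt)  # paste or send this to a chat model of your choice
```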
James Pittman: Well, that's really helpful, and thanks a lot for that information, Mark.
James Pittman: A lot of our listeners are immigration lawyers, and they're interested not only in their own work as attorneys, but in the possible use of AI in adjudication. And we do see examples, especially from Europe, of AI beginning to be used in some forms of adjudication, like initial determinations in political asylum claims. So that always brings up the topic of biases in adjudication, and immigration lawyers in general get concerned that adjudicators, whether they're human adjudicators or, in the future, machine adjudicators, can have biases of various kinds, which become relevant, especially in immigration. What thinking have you done about the role of AI in adjudication?
Mark Williams: Right now, in this current iteration of the lab, we are much more focused on AI for law, and less so on the law of AI, in terms of getting into policy questions like that. We do have folks here at the law school who are thinking about those things in a more robust way. But what you also find is that even if you're just trying to stick to practical applications of AI in law, you inevitably find that it bleeds into these more policy-oriented questions. Now, I am not an immigration expert, and I'm not a jurisprudence expert, so I do not want to venture too far into fields where I'm not qualified to speak. But when I hear other smart people who are thinking about this in a more doctrinal way, it is interesting to watch us all wrap our minds around what level of bias is acceptable to us, because it's not a binary, this is biased, this is not biased. We know notoriously that algorithmic bias, particularly as a result of the training sets these models have been trained on, is just a reflection back to us of our human biases. And so that gets spit back to us in all kinds of interesting ways. But as you said, the humans who are adjudicating these decisions are also very biased. So the latter discussion, what percentage of bias is okay, and what level are we willing to accept, I find those frameworks and discussions interesting. I cannot be the one who gives other people direction on how they should think about that. But I do find it interesting to see that discussion being set up, because a human will say, well, we can't trust this AI model to adjudicate, it's biased. Okay, but then we have all these benchmarking ways of pointing out, yes, but so is the human judge, probably more so. That's the line of inquiry that is interesting to me. But again, I'm not the person qualified to pontificate about how others should feel about that.
James Pittman: Thank you.
James Pittman: But more in terms of this: whenever you have innovation, whenever you have change, especially something as monumental as the AI revolution, and we're just on the leading edge of it, you'll have various forms of resistance to change. Those arise from discomfort; they arise because of systemic factors. How are you going about addressing resistance to change in your dealings with the faculty, with students, and with other interested parties?
Mark Williams: That's what's so interesting about this technology. And I'll preface this by saying it doesn't really apply to our faculty or our students, because if anything, I feel like we've got a group here that's been very embracing of change, and some of that's just through the Program on Law and Innovation, which, even before generative AI came along, was talking about embracing mindsets of curiosity in a technology-driven society. But in the legal profession in general, I think we know we're a risk-averse people, right? A lot of us wouldn't have ended up in law school otherwise; it was kind of the safe choice. From that perspective, it is very interesting that when I talk to knowledge management workers at large law firms, or attorneys, or whoever, they kind of uniformly say to me that they've never seen the industry snap to attention to a technology the way it has with this. It's truly different. And the surveys tend to bear this out. There have been several surveys by different companies over the course of the last six months or so, and they all show that the legal profession has a higher level of awareness of generative AI products and tools than the average citizen. It's something like an 80% rate that have heard of it, which doesn't necessarily mean you've used it or plan to use it, but you're hyper-aware of it, versus about a 60% rate for the general population. And the surveys also tend to show that there's, at least more so than with previous technologies, an openness or willingness to understand that this will have an impact on the way lawyers work. And that makes sense, because, as you said before, our stock in trade is words, and this is a calculator for words. So it has to play some kind of role in the way our work goes forward, even if it's not the practice of law per se. When I talk to people in knowledge management or competitive intelligence, people who are in the business of law, they are already using this stuff every day. Even if all this did was summarize things, if it were just a summarization machine, that alone has already proved to be an invaluable use case for so many people out there. Even if the models never got better, and that was the only task they ever did, it's already been revolutionary on that point.
James Pittman: Have you seen it affect law firms? For example, in the summer associate program: summer associates going into a firm are usually expected to do some of those tasks, like summarization, legal research, coming up with a draft of a portion of a brief, or something of that nature. Have you seen firms adopting it and allowing, let's say, associates or summer associates or interns to utilize AI in the firm as such? Have we reached that point?
Mark Williams: I feel like there has not been a uniform response on that; somebody else will have to measure it more rigorously. Through my anecdotal conversations, I will talk to one firm, and they're trialing things. They have it among their innovation group, but they may be hesitant to deploy it firm-wide just yet. But again, some of those conversations were taking place over the summer, before tools like Lexis+ AI came out, before the more law-specific, bespoke models that people are accessing now; our law students now have access to Lexis+ AI. So the calculation changes every week as to who should get what, and when, and how they should use it. It does not seem to be a uniform response. I think everybody is figuring that out right now. It'll be interesting to see this summer, when this year's crop of graduates goes into their firms, now that we've had a year under our belt with this technology, what the approach is, because everybody is looking at it, everybody's playing around with it. I know some firms have deployed it much more firm-wide than others, where others may be a little more risk averse. I don't think there's one answer to that, at least based on my anecdotal discussions.
James Pittman: Well, if we can, let's drill down a little bit on the access-to-justice angle. You mentioned your collaboration with Legal Aid of North Carolina. Access to justice is one of the things that's very exciting about AI. I mean, we have a huge problem in this country with the affordability of legal services and so forth; we all know that. How do you see AI currently making a difference?
Mark Williams: That could go a lot of different ways, because I view generative AI, really AI in general and generative AI in particular, like electricity. It's a general-purpose utility. So there are different ways it can run through all of the work we do that don't necessarily involve a chatbot interface. But we're in a phase right now where we still get to decide how that plays out. I don't know that there's an answer to it yet. One of the interesting things I've seen so far, in the Fifth Circuit but in other areas as well, is well-meaning court rules coming out requiring attorneys or pro se litigants to certify whether they've used AI or not. And I understand the impetus behind those orders, but oftentimes their definitions are so broad or ill defined that you couldn't even do a Lexis or Westlaw search the way they define AI. There are a few people out there keeping track of orders where judges have either penalized litigants or rejected filings because it was clear that a pro se litigant had used ChatGPT to craft the documents they submitted, or had even submitted fake cases. There have been a few instances of that. But the thing to me, particularly with pro se litigants, and we still get to decide this, is that a lot of the issues they bring to a court don't fall apart because they don't have a legal claim. They fall apart because of bad writing, because they're unable to articulate what their legal problem is. It's not that they don't have one; it's that they're not capable of saying what it is. And that, to me, is a place where this technology, whether it's going straight to a court or helping someone articulate their situation to a legal aid clinic, using it as a translator to suss out what problem the person is really having, even if it's not the final end product, its ability to do that sort of triage is really interesting to me. But again, there are a lot of caveats to that. There are a lot of policy responses, a lot of discussion we'd have to have before you got to that end result. And I fear the approach we've taken now cuts that conversation off before we can even have it, if that makes sense. So that is the part that's really interesting to me. And when you talk about access to justice, there are too many people who have a legal problem, or don't even know they have a legal problem, and not enough legal services to reach those people. I wish Professor Moon were here with us, because she is able to rattle off the statistics in a much more aggressive fashion than I can. But we know, and we've measured for a long time, that there are a lot of people out there who have some kind of legal need and aren't able to articulate it, who have a legal problem and aren't able to access legal services. And it's an intake problem, or a mechanics-of-law problem, as much as it is about the actual claims themselves. And these tools could certainly help with that in ways we may not even be able to anticipate right now. That one is interesting to me in particular: I have a problem, but I don't know how to articulate it; can I work with this model to help refine what it is I'm exactly trying to say? If that makes sense at all.
James Pittman: Yeah, it makes a lot of sense, and it's very important for our users in the immigration law community. Certainly, in analyzing a given fact pattern and determining your client's potential course of action, there could be several pathways, and you're also taking stock of all kinds of factors in the potential client's background. Those are all areas where AI language models could potentially be very helpful.
James Pittman: I wanted to ask you a little bit about the ethical angle. Is the lab currently implementing any guidelines to try to develop best practices for the ethical use of AI? And related to that, you mentioned someone keeping track of those court orders that come down, but are you aware of any initiatives, especially by practitioners or by the organized bar, to develop ethical guidelines around generative AI?
Mark Williams: Yes. Right now we have been more in the educational mode than in developing policy or participating directly in that discussion. We just did a CLE here locally for attorneys, which Professor Suvall was a part of, talking about ethical guidelines as we currently know them. I'm sure you're aware California has released some of its guidance, and Florida as well. It's interesting to compare and contrast those. The California proposed rules, at least, seem to line up with, and be largely influenced by, the work that the group out of MIT has been doing, which I'm sure you're familiar with. So we have mostly been keeping track and keeping the attorneys we talk with educated about that, rather than proposing a policy ourselves, although we are certainly trying to participate in those discussions as well, especially Professor Suvall, who is more of the professional responsibility expert. But it's interesting to me: there is the bar association guidelines-and-rules kind of ethics piece, but then there are also the more esoteric, long-term philosophical ethics pieces about models and their use, how they are trained, and who is responsible for harmful outputs, things along those lines, especially when you start getting into things like AI agents and who is culpable for their actions. Right now, AI agents don't really work in a way that makes me worried about them causing harm such that we have to determine who's liable. But you could see entire future practices of law spinning up around that: I was harmed by the actions that the model you devised took, and we need to decide who is owed compensation and who is responsible. Those are things I don't have the answer to, but it definitely runs the gamut. Right now, it's mostly about educating the students and attorneys we talk to. We kind of view ourselves as an extension office here, like a local agricultural extension office for AI, where we can serve as a local resource and host events, so you can come get your CLE credit and also learn about the current state of things. But because I've been teaching law practice technology for a while now, I can tell you there is no one uniform approach across the states right now. And I'm reminded a lot of the social media ethics rules, or any other technology rules, where major jurisdictions would often have really conflicting guidance on how they approached different sets of rules. So the main thing right now is just educating everybody that you really have to be hyper-aware of what the rule is where you are, because the ABA model rule, or at least its interpretation, may not line up with where you are. We've seen that happen with previous technologies. So that's another one where, if you talk to me two years from now, I'll have a much more robust answer for you. But it seems like, as a profession, we're still very much trying to figure out what we even mean when we say ethics for these models, if that makes sense.
James Pittman: Yeah, and I just really think it's important for practitioners, as you said, to first get up to speed and get educated, but then also to be participants in developing the policy. And I'm just very interested in following what the fora are for developing this policy and how practitioners can really have input, so that the whole regulation and policy angle gets off in the right direction.
Mark Williams: Yes, I think that's the number one thing right now: we still get to decide all of this. It's not a predetermined fate. People can get involved and shape this in really substantive ways, because the executive order from the White House on all of this is just a month or two old. Right. We're all figuring this out at the same time, and we still get to decide.
James Pittman: Yeah. Now, Mark, that CLE that you mentioned, is that something that people outside the law school can avail themselves of, or was it only internal?
Mark Williams: That particular one was not recorded. But again, we are very much in the business of education and running those kinds of events. So, yeah, we are open for business for anything along those lines, for sure.
James Pittman: Last question for you, Mark, as we get to the end of the hour. Have you heard of interest from other law schools, other institutions that want to do something similar to what you're doing? And do you have any plans to serve as a model?
Mark Williams: We're the first that I know of that have constituted an AI lab quite like what we're doing now. There are other law schools out there that have AI centers, maybe more policy oriented, and certainly other innovation groups as well. But, yeah, that's part of the great thing about the legal technology community: it's very collaborative, and we would not be able to do what we're doing if we didn't have help from other law schools, and vice versa. So that's the nice thing about what we do. We may be the first in this exact iteration, but I'm sure we will not be the last.
James Pittman: For sure not. Mark, it's really been a fascinating discussion, and it's so exciting what you're doing there at Vanderbilt. I'm sure we're going to have you back in the future to follow your progress. So thanks very much for your time.
James Pittman: And this has been Mark Williams with Vanderbilt Law School's AI Law Lab. Mark, thank you for joining us.
Mark Williams: Thank you very much. Appreciate it.