Generative AI and the Value of the Process

Written By John E. Grant  |  Productivity

Several months ago a legaltech friend asked me to read through their in-progress e-book on the role of generative artificial intelligence in in-house legal workflows. The first several chapters were a useful background on AI and its potential uses for general counsels and their teams, but I physically recoiled when I got to the first sentence of the chapter on "How to Implement AI."

"Because generative AI is such a revolutionary technology, implementing it correctly requires special considerations and a new approach." 

I highlighted the sentence in the Google Doc and at-mentioned the author with a one-word comment, "Why?"

This led to a substantial re-write of that and several other chapters to explain why assessing and implementing AI technology should follow the same basic steps as implementing any other organizational change, things like:

  • Identify the organizational need (an opportunity or a shortcoming)
  • Determine whether acting on that need is the best use of your time and resources
  • Look at different options for addressing the need (changing people, processes, and/or tools)
  • Assess the strengths and weaknesses of each option, both in general and relative to the need
  • Test the tool using small, safe-to-fail experiments (possibly in a sandbox, possibly in a low-stakes live scenario)
  • Gather feedback from the experiments — both objective and subjective — to help you decide whether to expand the implementation, change it, or stop it entirely
  • Repeat

Although I've tried a number of different ways to bring AI into my own business, I've yet to find one that I wanted to stick with. Which isn't to say I won't keep trying, but so far I haven't found a tool that is genuinely helpful over the long term.

Among the things I've tried:

Asking ChatGPT to help me write articles or blog posts

This is something I keep going back to, and I keep being underwhelmed by the results. For one, I actively dislike ChatGPT's default writing style. I feel like everything it writes is trying to sell an idea instead of presenting it objectively and letting the reader draw their own conclusions. I'm not sure why I'd expect anything different from something that was trained on The Internet, but I'm still surprised at how cringeworthy its results are.

And yes, I know this is something you can get ChatGPT to improve once you get better at prompt engineering. I've followed the excellent work of folks like Damien Riehl and Josh Kubicki on that topic. Even if I can get the tool to write in a style that doesn't annoy me, however, I still have two major problems with using ChatGPT to write.

First, it doesn't know nearly as much as I do about my core area of expertise. I truly don't think there are many folks in the world who have as much experience implementing Agile methods in the legal space as I do. Most of the Internet's information about Agile has to do with technology teams, and most of its information about improving legal operations takes a decidedly non-Agile approach. These are the materials ChatGPT is trained on, and it has a hard time reconciling the two. The best use case I have for getting AI to write about using Agile in legal is to provide something for me to criticize. That's not nothing, but it forces me into an Ackchyually Guy writing style (even if I fall into that style more naturally than I care to admit).

More than that, I've found that using AI to write for me — either to create a sloppy first draft or to tighten up one of mine — defeats a couple of my primary purposes for writing in the first place: to learn new things and improve my thinking.

These goals crystallized for me when I fortuitously plucked William Zinsser's Writing to Learn from the shelf of a vacation rental several years ago. It is full of examples of evocative writing across diverse disciplines (from history to chemistry to music to math) by authors both famous and obscure, all in pursuit of Zinsser's thesis that writing is the best way to engage in quality thinking. The output is important, he argues, but so is the process.

Using AI for meeting summaries

I've found something similar when using AI tools to transcribe and summarize meetings. Yes, it is amazing how good voice transcription technology is, but for most meetings a full transcription misses the point: reading a transcript is far inferior to being there, and if you were there you rarely want to re-live the whole darn thing.

Which is where the AI summaries seem useful. But I find that AI summaries tend to obscure — or worse, misstate — what was actually discussed in the meeting. The first few times I used AI meeting transcriptions I foolishly abandoned my own note-taking. When I went back to the summaries, I found myself struggling to recall the key points of what we discussed. So I went to the full transcript to refresh my memory. Not only was it painful to read the full text of a conversation I'd just had, I still found it challenging to pull out the meaning. It also took forever for me to re-process the whole thing.

I've gone back, again, to taking meeting notes by hand. I find the random words and short phrases I scrawl in my notebook to be far more useful than AI-generated bullet points, even if there is less total information in my notes.

A number of studies have shown that handwritten class notes promote deeper recall than typed ones; I expect future experiments will find that letting a machine do the typing for us leaves recall shallower still. My hypothesis is that the words and symbols that come from my hand reflect the things that are most interesting to me. And because they come out only after the words my ears heard have been processed through the messy context and unique experiences that exist only in my brain, they make better sense to me than even a more complete and accurate summary ever could.

Using AI to build a kanban board

My latest AI temptation came last week as I was helping a client design a new kanban board for their litigation workflow. While it isn't one of my go-to tools, the client was already using KanbanTool, which is a perfectly capable system.

After I was given access to the client's account, I discovered a new feature: KanbanTool's AI Assistant (powered by OpenAI) for suggesting board columns and card templates. Always game for an experiment, I prompted the tool to design "a process for handling legal matters under the Fair Credit Reporting Act." The columns it suggested seemed reasonable at first blush: Intake > Research > Analysis > Drafting > Review > Finalization > Closure. As I thought about it, however, I realized that those are really the steps of a transactional workflow, not a dispute-driven one.

Less good were the card template suggestions. They were geared more towards how a business should comply with the FCRA than towards the different types of claims that could be brought under the statute, things like "Developing a FCRA training plan" and "Establishing a procedure for responding to consumer disputes."

When I updated my prompt to say "for card types, please use different legal claims that can be brought under the FCRA," the tool got better. It suggested things like "Unauthorized use of a consumer report" and "negligent non-compliance with the FCRA." But I was hesitant to use the tool's suggestions without knowing what source materials it used to create them. Because I'm not an FCRA expert, I didn't know whether I could trust these suggestions as accurate, nor did I have any obvious way to verify them.

In the end, I went back to the "writing to learn" method — I did my own research into the FCRA to come up with my own suggested card types, and then sent them to my client (the real expert) for validation and editing. In the process I got a bit smarter about the FCRA, something that wouldn't have happened if I'd blindly accepted the AI's suggestions.

What's funny is that the client was doing his own research while I was doing mine, and he decided that we should be using the litigation template I created for Kanban Zone as our starting point. We're still working out whether to re-create that board design in KanbanTool or whether it is time to try new software. In the spirit of "start with what you do now," I'm leaning towards the former (even though I'm a solutions partner for Kanban Zone). 

There's value in the process

I've been chipping away at this concept in a different context lately (albeit not in a way that's public yet). I'm writing about how legal professionals can get better at assigning and receiving client homework, and one of my suggestions is to design client-facing forms in a way that helps clients better understand their legal situation. My experience (and that of my clients) has taught me that clients complete their homework faster and more accurately when they understand the context of the request, and that explaining context pays long-term dividends in the overall client relationship.

Designing forms as an education tool instead of a transactional one changes the nature of the client's interaction with them. When the form is primarily about the lawyer getting information, then producing that information is a chore. When the form is also designed to give the client context and teach them what's important in their legal situation, then producing the information becomes engaging. The client is (more) eager to give because they want to see what they'll get in return. The lawyer gets better information and the client gets smarter.

It is natural for a legal professional to value the output more than the process, especially with something as seemingly boring as gathering client information. We know that we need certain information to be able to do the legal work (what we think of as the actual work). But when you look at things through the lens of delivering client value, you'll discover any number of ways to generate win-win outcomes on otherwise routine interactions.

I'll end by recommending an excellent op-ed from University of Pennsylvania professor Jonathan Zimmerman: Here's my AI Policy for Students: I Don't Have One. My favorite line: "some courses really do ask you to think. And if you ask an AI bot to do it instead, you are cheating yourself. You are missing out on the chance to decide what kind of life is worth living and how you are going to live it." 
