Don’t panic: Your AI co-worker is here to help

Jake Heller, head of product for CoCounsel – an AI bot from multinational information conglomerate Thomson Reuters – has a tip for anyone selling AI software: Don’t sell software.

Rather, he argues, sell AI as a co-worker. Corporations understand the cost of people but not the cost of code, he reasons – and the approach has worked for him, at least.

As he recounted at the GenAI Summit 2024 in San Francisco last week, he was one of the founders of Casetext about a decade ago. When the startup began, it offered a crowdsourced database for legal research.

Then about two years ago, he was among those granted early access to OpenAI's GPT-4 and used it to develop CoCounsel, an AI legal assistant that debuted on March 1 last year, two weeks before GPT-4 made a limited public debut. A few months later, Casetext was acquired for $650 million by Thomson Reuters.

Heller argues that “building AI co-workers is going to be the largest opportunity of tech in our lifetime, measured by impact and measured by the amount of revenue, amount of money you can make building these kinds of technologies.”

Jake Heller, head of product at CoCounsel, sees sunny days ahead

“When a lot of people hear ‘AI co-workers,’ especially the way I’ll be talking about it a little later, it gets a little scary,” he admitted. “What jobs will there be left for humans? I think it’s actually bright rather than dim, for reasons that are a little bit different than a lot of the reasons people talk about.”

When people talk about generative AI, said Heller, they often focus on the term “generative” – but he argues that it’s the “intelligence” aspect that deserves attention. In the past two or three years, AI has got a lot more “intelligent” in ways that really matter, argues Heller, and it can take on real-world work.

Even two years ago, he said, AI models were reportedly passing bar exams – and doing so better than 90 percent of the people taking the tests. The AI did better than 80 percent of those taking the Graduate Record Examinations (GRE), he asserted.

And this is old news, according to Heller. He expects that the technology as it exists today will lead to the emergence of AI co-workers – and he points to CoCounsel as an example of the genre.

“It can accomplish legal tasks at about a human level, if not a bit better, but at superhuman speeds,” he claims. “Things that would take me, when I was a lawyer, many, many days to complete, it can do in 90 seconds.”

For example, CoCounsel has proved itself capable when asked to find evidence of fraud in a large set of email messages – a common but boring task for lawyers. CoCounsel's results are good enough that in just a year and a half on the market, 60 percent of elite law firms – the AM Law 100 – have adopted it. It has done 1.5 million tasks for attorneys – such as reviewing documents, reviewing contracts, and other research-oriented tasks, Heller claimed.

"A lot of people look at GenAI, and they think it's maybe hype, that there are all these kind of cool demos, but not real applications," observed Heller. "There are, in fact, real applications, real revenue for real customers, and real benefits."

To build CoCounsel, the Casetext team defined a set of skills that a junior lawyer might need – such as summarizing documents, editing a contract according to instructions, or reviewing a contract for corporate compliance. They broke those skills down into steps, developed prompts, and worked out how to combine those skills in different ways.

The basic formula is simple, Heller noted, but the challenge is ensuring correctness.
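In outline, that "skill" pattern – a task broken into ordered prompt steps, with each step's output feeding the next – can be sketched as follows. This is a minimal illustration of the general approach Heller describes, not CoCounsel's actual code; the function names and prompts are hypothetical, and the model call is stubbed out.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model API call (stubbed for illustration)."""
    return f"[model output for: {prompt[:40]}...]"

def run_skill(steps: list[str], document: str) -> str:
    """Run each prompt step in order, feeding the prior output forward."""
    result = document
    for step in steps:
        result = call_llm(f"{step}\n\nInput:\n{result}")
    return result

# A hypothetical "summarize" skill broken into two prompt steps
summarize = [
    "Extract the key facts and parties from this document.",
    "Condense the extracted facts into a three-sentence summary.",
]

print(run_skill(summarize, "Sample contract text..."))
```

Skills composed this way can themselves be chained – the output of a document-review skill becoming the input to a drafting skill, for instance.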

“The difficulty really is making sure you should get the right answers,” he said. “And I think part of the secret to how we became widely adopted in legal – and I think that will be true for many of the different professions and industries that you all serve – is an extremely robust testing methodology. Literally we’ve run over 600,000 tests, some of them against these individual prompts … and sometimes end-to-end, like if a request goes in and answer comes out, did [the AI model] hallucinate?”
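The end-to-end hallucination checks he mentions can be approximated by verifying that every source an answer cites actually exists in the material the model was given. The sketch below shows that idea under assumed conventions – the `[cite:...]` tag format and the function name are inventions for illustration, not anything from CoCounsel's test suite.

```python
import re

def answer_is_grounded(answer: str, known_sources: set[str]) -> bool:
    """End-to-end check: every [cite:...] tag in the answer must name
    a source that exists in the retrieval set, and there must be at
    least one citation."""
    cited = set(re.findall(r"\[cite:([^\]]+)\]", answer))
    return cited <= known_sources and bool(cited)

sources = {"Smith v. Jones", "UCC 2-207"}
good = "The clause is enforceable [cite:Smith v. Jones]."
bad = "The clause is enforceable [cite:Made-up Case]."

assert answer_is_grounded(good, sources) is True
assert answer_is_grounded(bad, sources) is False
```

Per-prompt tests would sit below this tier, asserting on the output of each individual step rather than the final answer.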

Heller emphasized that only when his team had confidence in the correctness of CoCounsel’s responses did they feel comfortable giving them to lawyers – whose jobs are on the line if errors occur. And one way the organization helped reassure customers is by providing sources they can read with a single click, to confirm what the AI model returns.

“The goal is not to take the lawyer’s job away from them, but to assist them in all the tasks that they’re doing – especially the stuff that takes a long time, but they and their customers get the least value out of,” said Heller.

CoCounsel “was never pitched as software. We pitched it as a co-worker. The value isn’t, ‘this will make certain workflows a little more efficient, a little more effective for you.’ The value is, ‘now you have an additional member of your team.’ And that lets you price in a way that follows value.”

Heller went on to say that he doesn't believe AI co-workers will end up taking jobs away from people. "I think they will be upleveled to do more interesting stuff," he predicted.

That’s been his experience with CoCounsel, having sold it to thousands of customers. “The number of layoffs that we’ve heard about as a result of the mass adoption of this technology in the legal industry is zero,” he declared. Instead, it lets law firms say ‘yes’ to cases they’d have previously declined for lack of capacity.

That said, Heller acknowledged that what’s economically beneficial in aggregate may not be positive for every person.

“Will this be net positive for every single person in society? Probably not,” he conceded. “Even if it’s truly net positive in the aggregate, I think some individuals in some job categories and some tasks are gonna be harder hit than others and faster than others. And so, we’re going to rise to the challenge of massive potential for displacement, shifts, et cetera.”

To address that, according to Heller, we need appropriate policies. “Does America, for example, have the capacity to react appropriately to this new technology?” he mused.

“I see a lot of shaking heads. Hopefully yes. But I think this will become one of the more important policy questions of the next decade.” ®


Thomas Claburn