
Law firms rapidly migrating to AI are weighing benefits against risks and unknowns
Jul 20, 2023 8:46 am CDT
“When you think about its ability to collect, analyze and summarize a lot of data, that’s a tremendous head start for any legal project,” says DLA Piper partner and data scientist Bennett B. Borden. Image from Shutterstock.
Updated: In the fall of 2022, David Wakeling, head of the Markets Innovation Group at law firm Allen & Overy in London, got a glimpse of the future. Months before the release of ChatGPT, he was given a demo of Harvey, a platform built on OpenAI’s GPT technology and tailored for large law firms.
“As I peeled the onion, I could see that it was quite a serious matter. I’ve been involved with technology for a long time. It’s the first time the hairs on the back of my neck stood on end,” says Wakeling.
Allen & Overy was soon among the first firms to adopt Harvey, announcing in February that 3,500 lawyers in 43 offices were using it. Then, in March, accounting firm PricewaterhouseCoopers announced a “strategic alliance” with the San Francisco-based startup, which recently secured $21 million in funding.
Other major law firms have adopted generative AI products at breakneck speed or are developing their own platforms. DLA Piper partner and data scientist Bennett B. Borden calls it “the most transformative technology” since the advent of the computer. And it suits legal work well: It can speed up mundane tasks and free lawyers to focus on more meaningful work.
“When you think about its ability to collect, analyze and summarize a lot of data, that’s a tremendous head start for any legal project,” says Borden, whose firm uses Casetext’s generative AI legal assistant, CoCounsel, for legal research, document review and contract analysis. (In June, Thomson Reuters announced that it had agreed to buy Casetext for $650 million.)
But generative AI is forcing companies to address the risks of deploying the new technology, which is largely unregulated. In May, Gary Marcus, a leading expert on artificial intelligence, warned a U.S. Senate Judiciary Committee subcommittee on privacy, technology and law that even the makers of generative AI platforms “don’t quite understand how they work.”
Law firms and legal technology companies also face the unique security and privacy challenges that come with using the software, as well as its tendency to produce inaccurate and biased answers.
Those concerns became clear when it emerged that an attorney relied on ChatGPT for citations in a brief filed in New York federal court in March. The problem? The cases mentioned did not exist. The chatbot had invented them.
Careful, proactive
Harvey representatives did not respond to several interview requests. But to guard against inaccuracies and bias, Allen & Overy’s New York partner Karen Buzard says the firm has a solid training and vetting program in place, and attorneys are presented with “rules of use” before they can access the platform.
“No matter what level you’re at — from youngest to oldest — if you’re using it, you have to validate the output or you could embarrass yourself,” says Wakeling. “It’s really disruptive, but wasn’t every major technological change disruptive?”
However, other law firms are more cautious. In April, Thomson Reuters surveyed attitudes toward generative AI at midsize and large law firms, concluding that the majority “take a cautious but hands-on approach.” It found that 60% of respondents had no “current plans” to use the technology; only 3% reported using it, and only 2% said they “actively plan to use it.”
David Cunningham, chief innovation officer at Reed Smith, says his firm is proactive when it comes to generative AI. It is currently testing Lexis+ AI and CoCounsel and will pilot Harvey this summer and BloombergGPT when it is released.
“I wouldn’t say we’re more conservative,” says Cunningham. “I would say we put more emphasis on making sure we’re doing this with guidance, guidelines and training and really focusing on the quality of the results.”
He says the law firm’s pilot program focuses on commercial systems where the firm “knows the guard rails.” “We know the security, we know the retention policies,” he adds. “We know the governance issues.”
“The reason we are cautious is because the products are still immature. The products still do not offer the quality, reliability, transparency and consistency that we would expect from a lawyer,” he says.
Pablo Arredondo, co-founder and chief innovation officer at Casetext, says there’s a big difference between “generic chatbots” such as ChatGPT and CoCounsel, which is built on OpenAI’s GPT-4 large language model but trained on legal datasets, with data that is kept secure, monitored, encrypted and audited.
He understands why some are taking a more cautious approach, but predicts that the benefits will soon be “so noticeable and undeniable that I think the adoption rate will increase.”
New rules
Meanwhile, regulators are catching up. In May, Sam Altman, CEO and co-founder of OpenAI, called on lawmakers in Congress to regulate the technology. He also said that OpenAI could withdraw from the European Union over its proposed artificial intelligence law, which includes requirements to prevent illegal content and to disclose the copyrighted works that makers use to train their platforms.
In October, the White House released its Blueprint for an AI Bill of Rights. It calls for protections against “unsafe or ineffective” AI systems; safeguards against algorithmic discrimination; protections against practices that violate data privacy; notice so people know when AI is being used and how it affects them; and the ability to opt out of AI systems in favor of a human alternative.
In January, the National Institute of Standards and Technology released an AI risk management framework to foster innovation and help organizations build trustworthy AI systems by governing, mapping, measuring and managing risk.
But the public had to wait until June for Senate Majority Leader Chuck Schumer to unveil a much-anticipated strategy for regulating the technology. He introduced a regulatory framework and said the Senate would hold a series of forums with AI experts before formulating policy proposals. Then, in July, the Washington Post reported that the Federal Trade Commission was investigating OpenAI’s data security practices and whether they had harmed consumers.
Still, DLA Piper partner Danny Tobey argues there is a risk of over-regulation due to scaremongering and misconceptions about how advanced the technology is.
“I worry that regulations will become obsolete before they even come into effect, or stifle innovation and creativity,” he says.
But speaking to lawmakers in May, Marcus said AI systems must be unbiased, transparent, protect privacy and “above all, be safe.”
“Current systems are not transparent, they don’t adequately protect our privacy, and they continue to encourage bias,” Marcus said. “Most importantly, we cannot remotely guarantee that they are safe.”
Others are calling for a halt to the development of large language models until the risks are better understood. In March, the technology ethics group Center for AI and Digital Policy filed a complaint with the FTC asking it to halt further commercial releases of GPT-4. The complaint followed an open letter signed by thousands of technology experts, including SpaceX, Tesla and Twitter CEO Elon Musk, calling for a six-month pause on research into generative AI language models more powerful than GPT-4.
Ernest Davis, a professor of computer science at New York University, signed the letter and considers a moratorium a “very good idea.”
“They release software before it’s ready for general use simply because the competitive pressure is so intense,” he says.
But Borden says there is “no global authority” or global governance of AI. Even if a pause were a good idea, “it’s not possible.”
“Pausing AI is like pausing weather,” Tobey adds. “We have to innovate because countries like China are doing it at the same time. Still, companies and industries need to play a role in shaping their own internal governance to ensure these tools are used securely like any other tool.”
Updated July 20 at 11:20 am to include additional reports and information on the Federal Trade Commission investigation into OpenAI and Senate Majority Leader Chuck Schumer’s announcement of a regulatory framework.