In an age where nearly every industry is deepening its reliance on artificial intelligence (AI) across a myriad of processes, from marketing and sales to product development, engineering, supply chain management, and more, a new picture of how business is conducted is emerging. In the case of generative AI (genAI), it is one that promises C-suite executives optimized workforce efficiency, slashed expenses, and accelerated research efforts.
However, it also leaves room for potential blunders.
GenAI models leverage machine learning and often unvetted, public information to create new content, and in doing so can perpetuate the biases, inaccuracies, and prejudices that exist within the data they reference. If legal professionals rely on the resulting outputs, the business consequences can be serious.
This has led one legal business in EMEA not only to build an in-house genAI model but also to establish policies that compensate clients when the technology proves unreliable. Orbital Witness is the first legal tech company to “offer customers a specific policy guaranteeing them that if its genAI solution gets something wrong, they can seek redress via specially designed insurance coverage,” Artificial Lawyer states.
Will other industry leaders follow suit, becoming more accountable for how they wield this powerful technology? A few other questions arise as to how legal titans can safeguard themselves against faulty genAI practices: Where has genAI fallen short historically? What legislation already exists to protect consumers from genAI inaccuracies? How are corporations taking this matter into their own hands?
Below, we answer these pressing questions and more using the AlphaSense platform.
The Shortfalls of GenAI for Legal Professionals
Due to its ability to generate highly realistic and complex content that can mimic human thought, creativity, and speech, genAI has stirred up palpable excitement for use cases across industries. According to Reuters, “law firm professionals chose legal research as their top potential genAI usage case,” in addition to document reviewing, memo drafting, and document summarization.
And yet, some of its current applications have already drawn sharp backlash.
Last year, the Washington Post laid out the reality of lawyers relying too heavily on genAI tools. As overworked lawyers turn to ChatGPT to quickly write tedious briefs, examine and synthesize piles of case documents, and review memos—work traditionally done by paralegals and associates—costly errors have become more common.
Certain AI chatbots have a tendency to generate inaccurate information, which has led to consequences such as lawyers being dismissed, facing financial penalties, or having their cases thrown out. For example, in June 2023, a federal judge levied $5,000 fines against two attorneys and their law firm, Levidow, Levidow & Oberman, P.C., in a groundbreaking case in which the attorneys had included fabricated legal research generated by ChatGPT in an aviation injury lawsuit.
A similar case occurred in January of this year, when an attorney used ChatGPT for research in a medical malpractice lawsuit and failed to confirm that the case she cited was valid.
While genAI cuts down on the time and money spent on a handful of laborious processes, it often lacks transparency in how it generates an output. Without insight into where a model pulls its reference data or how it uses that data to formulate a response, legal professionals are left unable to justify the courses of action genAI may suggest to their clients.
New Legislation for the AI Era
This past July, the European Union (EU) passed the Artificial Intelligence Act (EU AI Act)—a landmark law that “aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.”
The EU AI Act adopts a risk-oriented framework for overseeing every stage of various AI systems’ development and deployment. Entities failing to adhere to the EU AI Act could face severe financial repercussions, with penalties reaching up to EUR 35 million or 7% of their global annual revenue, whichever amount is greater.
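To put the “whichever amount is greater” rule in concrete terms, the short sketch below (in Python, using hypothetical revenue figures) computes the maximum possible fine for a given company:

```python
# Minimal sketch of the EU AI Act's maximum-penalty rule as described above:
# the greater of EUR 35 million or 7% of global annual revenue.
# The revenue figures passed in below are hypothetical.

FLAT_CAP_EUR = 35_000_000  # fixed ceiling: EUR 35 million
REVENUE_SHARE = 0.07       # alternative ceiling: 7% of global annual revenue

def max_penalty(global_annual_revenue_eur: float) -> float:
    """Return the larger of the two ceilings."""
    return max(FLAT_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

print(max_penalty(100_000_000))    # 7% is EUR 7M, so the EUR 35M ceiling applies
print(max_penalty(2_000_000_000))  # 7% is EUR 140M, which exceeds EUR 35M
```

In practice, the percentage-based ceiling only overtakes the flat EUR 35 million figure once global annual revenue exceeds EUR 500 million.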
The legislation also sets forth requirements for all parties involved with AI systems that have a connection to the European market, including those who supply, implement, import, distribute, or manufacture these products.
In other words, the EU AI Act applies to “(i) providers who place on the EU market or put into service AI systems, or place on the EU market general-purpose AI models (“GPAI models”); (ii) deployers of AI systems who have a place of establishment/are located in the EU; and (iii) providers and deployers of AI systems in third countries, if the output produced by the AI system is being used in the EU (Art. 2(1) EU AI Act),” White & Case shares.
Legal corporations should take specific notice of the EU AI Act’s classification and regulation of GPAI models. A GPAI model is defined as one that is trained on extensive datasets through large-scale self-supervision, displays significant generality, and can competently perform a diverse array of tasks. This definition applies regardless of how the AI system is marketed and includes models that can be incorporated into various downstream systems or applications.
This means that all providers of GPAI models are “subject to certain obligations, such as: (i) making available and maintaining up-to-date technical documentation, including its training and testing process, or providing information to AI system providers who intend to use the GPAI model; (ii) cooperating with the Commission and national competent authorities; and (iii) respecting national laws on copyright and related rights (Art. 53 EU AI Act),” White & Case adds.
Legal corporations that build genAI models classified as posing systemic risk will additionally have to perform standardized model evaluations, assess and mitigate those systemic risks, track and report serious incidents, and ensure adequate cybersecurity protection, as required by the EU AI Act.
Additionally, they’ll be required to notify the Commission within two weeks of becoming aware that their model qualifies, or will qualify, as one with systemic risk. The Commission states it will publish “a list of AI models with systemic risk” that will be “frequently updated without prejudice to the need to observe and protect intellectual property rights and confidential commercial information or business secrets.”
By spotlighting models that carry potential risks within their algorithms, this list serves as a resource for legal corporations that rely on publicly available genAI tools. According to a recent Reuters study, nearly half (47%) of corporate legal professionals said they currently use public-facing genAI systems or plan to within the next three years.
Corporate Protection Policies for AI
As legal companies become increasingly aware of genAI’s potential pitfalls, and as legislation is enacted to protect consumers from them, some are taking matters into their own hands to protect their clients and their own reputations.
As mentioned above, UK-based legal tech and proptech company Orbital Witness made news by offering an insurance policy on the accuracy of its generative AI outputs. Clients have the option to pay an extra charge, a fraction of the cost of using Orbital’s software, to secure this enhanced level of protection. So, on the off chance clients do encounter a genAI mishap through Orbital, they won’t have to resort to their own professional indemnity insurance.
Artificial Lawyer states that “[Orbital Witness’s policy] shows a very high level of confidence in one’s own product. I.e. if you’re willing to offer insurance on what your product does as a software company, then implicitly you are sure of the accuracy of its results. How many other legal tech companies will be keen to offer this? It certainly sets Orbital Witness apart when it comes to genAI outputs.”
While Orbital Witness may be the first legal tech company to institute such a policy, corporations outside of the industry have made similar moves. For example, Amazon and Microsoft have committed to safeguarding users from copyright and intellectual property lawsuits that could arise from their AI services. Google has also shared that they “are providing balanced, practical coverage for relevant types of potential claims by offering multiple generative AI indemnity protections.”
Meanwhile, Adobe Firefly offers an indemnity clause stating that the company will cover any copyright claims stemming from works produced by its genAI art creation tool. Canva has also introduced Canva Shield, which provides businesses with enhanced security, greater oversight of how their employees use AI, and indemnification for what its AI products create.
These protections involve assurances that cloud and AI providers will cover legal expenses associated with potential risks, including costs related to settlements and adverse judgments resulting from copyright disputes. Copyright litigation directly linked to genAI usage has increased significantly in recent years, a surge that has raised concern among businesses apprehensive about the implications of these legal challenges.
And while many legal corporations have yet to institute insurance policies similar to Orbital Witness’s, earnings calls within AlphaSense show that executives are increasingly focused on offering their clients genAI products and services that deliver accurate results:
“As I shared earlier, during the extension season, we tested new GenAI experiences to deliver higher confidence for our DIY customers. This includes in-topic accuracy checks and personalized explanations throughout the filing process that help explain a customer’s tax outcome.”
– Intuit Inc. | Earnings Call, Q1 2024
“I think at the crux of how clients and prospects are thinking about using GenAI. So first and foremost, accuracy of the results is critically important. And second, the adoption of that solution is critically important. And the adoption only takes place if the accuracy levels are high enough that they’re providing you with a distinct competitive advantage.”
– ExlService Holdings, Inc. | Earnings Call, Q1 2024
“To do AI well, whether it’s GenAI or more traditional data science machine learning, you need clean data. You need clean, accurate, timely, consistent and normalized data.”
– Veeva Systems Inc. | Conference Transcript, July 2024
“We see GenAI as a big opportunity, as I said, with applications across legal, tax, risk and news end markets. We already see this technology driving deeper integration between content and workflow software, which allows us to play a larger role in the success of our customers, and in doing so, expand our TAM.”
– Thomson Reuters Corporation | Analyst/Investor Day
“There’s a lot of legal implications still to be worked through. There’s lawsuits all over the place about GenAI…And you’re going to see people start to try things, mostly internally first, kind of quietly because of the — legal implications of exposing GenAI models to your customers.”
– NetApp, Inc. | Event Transcript
Staying Ahead in the Evolving Legal Landscape
As new market intelligence tools leveraging genAI emerge, it can be difficult to decide which option will provide the most efficient, effective, and accurate responses to your queries.
Unlike consumer-grade generative AI tools trained on publicly available data, AlphaSense takes an entirely different approach. Our leading market intelligence platform equips you with an extensive universe of content layered with AI search technology. Access thousands of premium, public, private, and proprietary content sources—including broker research, earnings calls, expert calls, regulatory filings, and more—in seconds, for the most comprehensive due diligence and informed decision-making.
Eliminate noise and surface the intelligence you need—start your free trial of AlphaSense today.