Information Today
Vol. 43 No. 1 — Spring 2026
LEGAL ISSUES
AI: Challenges in Design and Data Liability
by Amber Boedigheimer

LINKS TO THE SOURCES

The White House: Ensuring a National Policy Framework for Artificial Intelligence
whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy

Bloomberg Law: “The FTC’s Inquiry Into Chatbots Signals Broader AI Crackdown”
news.bloomberglaw.com/legal-exchange-insights-and-commentary/the-ftcs-inquiry-into-chatbots-signals-broader-ai-crackdown

RAILS: “What the Megan Garcia Case Tells Us About AI Liability in the U.S.”
blog.ai-laws.org/what-the-megan-garcia-case-tells-us-about-ai-liability-in-the-u-s

Bloomberg Law: “Google, Chatbot Maker to Face Bulk of Suit Over Teen Suicide”
news.bloomberglaw.com/litigation/chatbot-maker-to-face-bulk-of-mothers-suit-after-teens-suicide

Bloomberg Law: “AI-Faked Cases Become Core Issue Irritating Overworked Judges”
news.bloomberglaw.com/legal-ops-and-tech/ai-faked-cases-become-core-issue-irritating-overworked-judges

The National Law Review: “Lawyers Continue to Get in Hot Water for Citing AI Hallucinated Cases”
natlawreview.com/article/lawyers-continue-get-hot-water-citing-ai-hallucinated-cases

Cabarrus County Revised Administrative Order 25-09
nccourts.gov/assets/documents/local-rules-forms/Filed%20AI%20Revised%20Admin%20Order%202025.pdf

European Commission: AI Act
digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

In December 2025, President Donald Trump issued an executive order aimed at overriding state-level AI regulations. Specifically, the order sought to challenge or invalidate state AI regulations that the administration viewed as burdensome to innovation. It also framed AI leadership as a national priority and instructed agencies to streamline rules that might slow AI development, emphasizing uniformity over the patchwork of state laws emerging across the country. But courts and consumers have continued pushing accountability forward despite the federal pullback.

Major court rulings from 2025 have provided AI companies with clear guidance on the issues that need to be addressed in the future, including AI system design and training data. Courts have made it clear that AI design choices can cause actionable harm—and companies can be held responsible.

AI DESIGN LIABILITY

AI design liability is a huge problem. Lawsuits have claimed that chatbots have manipulated vulnerable users’ emotions, leading to violent acts and tragic suicides. For example, Garcia v. Character Technologies, Inc. was brought after a 14-year-old died by suicide following interactions with a Character Technologies chatbot. The U.S. District Court for the Middle District of Florida’s May 2025 ruling against Character Technologies’ motion to dismiss was a game-changer, signaling that traditional tort law is expanding its reach to include generative AI. The case raises the question of whether AI systems are interactive computer services protected by statute or products subject to product liability standards. If AI is treated as a product, plaintiffs may be able to argue that the “design” of the AI model was defective or that the company failed to provide adequate warnings about foreseeable harms.

AI DATA RELIABILITY

AI data reliability also remains an issue. In 2025, there was a significant increase in AI-generated hallucinations in legal research, leading to a wave of court decisions and sanctions over the citation of nonexistent, hallucinated cases. “There have been an estimated 712 legal decisions written about hallucinated content in court cases around the world, with about 90% of those decisions written in 2025,” a Bloomberg Law article titled “AI-Faked Cases Become Core Issue Irritating Overworked Judges” reports. In June 2025, The National Law Review published an article about a database that has identified 156 lawyers caught citing hallucinated cases.

Several attorneys have been sanctioned for submitting fabricated cases in federal court. In 2025’s Byoplanet International, LLC v. Johansson, the Southern District of Florida imposed nearly $86,000 in sanctions, the largest such sanction to date, against plaintiffs’ counsel for citing nonexistent cases, fabricated quotations attributed to real cases, and other hallucinated legal authority. In the Wyoming federal district court in early 2025, attorneys cited eight fabricated cases in a filing and were sanctioned $5,000. In a 2025 Utah trademark case, an attorney was sanctioned for submitting filings that included a fabricated legal precedent.

Revised Administrative Order 25-09, issued by Senior Resident Superior Court Judge Martin B. McGee, sets out how AI may be used in Superior Court proceedings in Cabarrus County, North Carolina (Judicial District 25). The order responds directly to concerns about AI hallucinations, authenticity, and the reliability of AI-generated materials, and it establishes clear duties for lawyers and litigants. Lawyers are responsible for disclosing whether AI tools were used in preparing filings or evidence and for validating any AI-generated content for accuracy before submitting it to the court. They must also retain documentation showing how AI outputs were generated so the outputs can be authenticated if challenged, and they must maintain technological competence.

AN ACCOUNTABILITY-DRIVEN ERA

The revised order makes Cabarrus County one of the most proactive jurisdictions in the U.S. in regulating the use of AI in courtrooms. By anticipating the risks posed by deepfake evidence, the order addresses both present and emerging challenges. Given the rapid adoption of AI tools across the legal profession, these rules are likely to influence other states to more closely regulate AI use in judicial proceedings as well.

As of 2026, AI companies are moving into a far more accountability-driven era. Governments in the U.S. and Europe are rolling out new rules that force AI makers to prove that their systems are safe, transparent, and not misleading to consumers. These changes directly impact the way chatbots, copilots, and automated decision tools are built, tested, and marketed.

HOW LAW LIBRARIANS CAN HELP

Law librarians can play a vital role in identifying and mitigating AI-generated hallucinations in legal research by conducting thorough reviews of AI-generated outputs to ensure accuracy and reliability. They can use advanced search techniques to track down sources and cross-reference AI-generated content with established legal authority, and they can provide training and resources on how to use AI tools effectively while maintaining professional standards. Law librarians can also advocate for policy guidelines that promote the responsible use of AI in legal research.

Law librarians are positioned to teach users to understand how and why hallucinations occur, distinguish between research tools and authoritative sources, and use AI outputs as starting points, not final answers. Furthermore, law librarians often help courts, firms, and schools draft guidance on when AI tools may be used in legal research, including required disclosures, citation checks, and documentation of verification steps taken after AI use. They can evaluate AI platforms for transparency, ensure citation reliability, recommend tools that integrate primary law databases or provide traceable sources, and warn against AI tools that lack accountability.

AI ENGAGEMENT

AI is here to stay, and we can’t ignore it. The real question is how we adapt to it. Its implications for our livelihoods tend to fall into a few key areas, including task displacement, skill evaluation, accountability, power, and access issues. Ignoring the developments in AI allows it to shape our livelihoods without guardrails. Engaging with it critically and deliberately, however, creates opportunities for us to define standards, protect our workers, and ensure that it supports, rather than undermines, professional integrity.

Amber Boedigheimer is the librarian for the Linn County Law Library in Albany, Oregon. It is a very small law library, serving about 600 patrons a year, and it is open to the public 4 days a week to provide legal information to patrons, including lawyers. The missions and goals of the library are to promote accessibility, ensure fairness within the justice system, and improve patron access to legal information. The library has a plethora of legal resources and offers patrons access to subscription databases, bar books, and other legal materials. Boedigheimer is a member of OCCLL (Oregon County Council of Law Libraries) and WestPac (Western Pacific Chapter of the American Association of Law Libraries). Send your comments about this article to itletters@infotoday.com.