AI and Pro Se Litigation: How LLMs Are Reshaping Legal Strategy
Over the past few years, advances in artificial intelligence, particularly Large Language Models (LLMs), have opened many legal doors for pro se litigants, though not always to their benefit. Attorneys should understand how this technology can make litigating against frivolous claims and defenses more difficult.
Individuals who cannot afford legal representation, and whose claims are not valuable enough to attract attorneys working on contingency, face the daunting task of litigating pro se. In some instances, the hope that artificial intelligence (AI) tools could help level the playing field has already been borne out. For example, pro se litigants have successfully used AI tools to prevent evictions and to defend themselves in debt collection actions.1 Ideally, LLMs could create a win-win scenario in which pro se litigants preserve valid claims and defenses, while attorneys on the other side are spared the often frustrating tasks of responding to poorly drafted, confusing pleadings and negotiating with parties who have little to no knowledge of the law.
Unfortunately, the promise described by many AI advocates has yet to fully materialize. LLMs, which at their simplest predict the next word based on patterns learned from vast bodies of written text, have a troubling tendency to “hallucinate” plausible-sounding but inaccurate statements. When used to draft legal documents, they will sometimes invent quotations, rules, cases, or statutes.
Attorneys accustomed to spotting spurious legal authority easily, and to relying on judges to flag obviously fake citations, may now overlook fictitious authority in LLM-generated pleadings. Unlike nonsensical sovereign citizen conspiracy theories, LLM-hallucinated citations often sound quite plausible: they may reference rules that are valid only in other jurisdictions, rely on outdated authority, or draw from cases at different procedural postures, all while following proper citation format. Indeed, not only have increasing numbers of attorneys been sanctioned for unwittingly relying on AI-hallucinated caselaw in briefs,2 but judges have even entered proposed orders, prepared using AI, that contain fictitious legal citations.3 With even legal professionals unwittingly using hallucinated citations, attorneys cannot assume that any citation in opposing counsel’s briefs is valid without double-checking it themselves.
Insurance companies and other common litigation targets should be prepared for major changes in common assumptions about pro se litigation strategy. LLMs, which default to validating their users’ views and positions, give pro se litigants false confidence in their cases.4 AI tools are also increasing the volume and complexity of pro se filings. Combined with LLMs’ ability to produce pleadings that look professional, at least at first glance, even completely baseless pro se cases can now drag on far longer than would otherwise be expected. Attorneys and their clients should keep in mind that simple, easy-to-dismiss pro se cases may become a thing of the past, and litigation budgets should anticipate that these cases will grow longer and more expensive.
LLM-generated pleadings may be only the first way this technology affects the litigation process. Unscrupulous parties can use emerging AI technologies to create fake photographic or video evidence to support their claims. This capability, once limited to the wealthiest parties capable of funding elaborate special-effects productions, is now far more accessible to the general public. The danger cuts both ways: parties might invoke the possibility of AI-generated evidence to falsely claim that any evidence unfavorable to them is fabricated. Verifying whether photos and videos are authentic may now require extensive forensic investigation.
Attorneys should prepare their clients for an increase in the number and complexity of frivolous lawsuits. They may also want to consider seeking sanctions the first time fictitious authority is cited, in order to deter further use of AI-generated caselaw.
__________________________________________________
1 Black, Nicole. “Are Lawyers Next on AI’s Chopping Block?” MSN (October 31, 2025).
2 Gorelick, Evan. “Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings.” The New York Times (November 7, 2025).
3 See Shahid v. Esaam, 376 Ga. App. 145 (2025).
4 Bajaj, Simar. “Next Time You Consult an A.I. Chatbot, Remember One Thing.” The New York Times (September 25, 2025).