How Lawyers Might Unlock the Secrets of AI
There is a prevailing anxiety in the legal community regarding Artificial Intelligence, and for good reason. We hear stories of lawyers citing non-existent cases generated by chatbots, and we worry about the "black box" nature of algorithms that make decisions without transparency, especially where those decisions touch sensitive client data or any meaningful exercise of judgment. But if we look past the hype and the fear, we find something surprisingly familiar.
At its core, a Large Language Model (LLM), which is what we’re really talking about when we say “AI,” is not a truth-seeking entity; it does not evaluate facts and logic to reach a reasoned conclusion. It is simply a completion engine. It predicts the next likely word based on the words that came before it, and it’s designed to please. It doesn’t like to respond “I don’t know,” because that stops the conversation. When it fails, producing what we call "hallucinations" or simply generic drivel, it is often because the instructions it received were vague. It lacked clear boundaries and directives.
This is where the legal profession has a distinct, even dramatic, advantage. Rules and semantics are our stock in trade.
Consider how a layperson asks a question versus how a lawyer asks a question on examination, or during deposition, or when she drafts a document. If you ask a layperson “Do you know what time it is?” (grammar issues aside) the answer is often “It’s XYZ o’clock.” If you ask the same of an attorney, the answer is hopefully a simple “yes” or “no.” Accordingly, a layperson might ask an AI, "Write a will for a married couple." The AI, relying on probability, will produce a generic, likely unenforceable document filled with platitudes.
A lawyer, however, knows that "a will" is a meaningless and ambiguous term without specific parameters (like jurisdiction, to start). In our practice, specifically in areas like estate planning (my primary practice area), we are trained to think in rigid structures and answer very clear questions:
Definitions: Who are the Grantors? Who is the Trustee?
Directives: What are the specific powers of appointment?
Contingencies: What happens if a beneficiary predeceases the Grantor?
(And I’m not even going to bring up the Rule Against Perpetuities.)
This rigid, structured thinking is exactly what "prompt engineering" should be. Instead of “vibe coding,” it’s concrete drafting: the application of strict guidelines to language to produce a specific, reliable result that can be consistently applied.
To unlock the power of AI, we must treat the prompt box not as a search bar, but as a delegation memo to a very literal-minded junior associate. This associate has read every book in the library but has zero common sense and will fabricate facts to please you if you aren't careful. If you don’t believe the comparison, go grab a brand-new attorney and give them super ambiguous instructions. They’ll be sweating before you finish the sentence.
If you tell a junior associate, "Research privacy laws," they might come back with anything, and they will very likely come back with something unhelpful. But that’s not the fault of the new associate; you gave poor instructions. If instead you tell them, "Summarize the implications of the Florida Digital Bill of Rights for a small law firm storing client data locally," you get a useful memo. (And it was a useful memo.)
The same logic applies to LLMs. To get reliable performance, we must apply the same rigid rules of construction we use in our own legal thinking and working:
Role Definition: Explicitly tell the AI who it is (e.g., "You are a senior Florida estate planning attorney.").
Fact Integration: Provide the immutable facts (e.g., "The client is a solo practitioner with digitizing goals...").
Negative Constraints: Tell it what not to do (e.g., "Do not infer facts not in evidence. Do not use flowery adjectives.").
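For the technically inclined, the three rules above can even be sketched in code. The helper below is a hypothetical illustration of the "delegation memo" structure, not any particular tool's API; the role, facts, and constraints are the examples from this article:

```python
# A minimal sketch of a structured prompt, assembled the way a lawyer
# structures a delegation memo. Illustrative only; the function name and
# layout are assumptions, not a prescribed standard.

def build_prompt(role, facts, constraints, task):
    """Combine role definition, fact integration, and negative
    constraints into a single structured prompt string."""
    lines = [
        f"Role: {role}",
        "Facts (treat as immutable; do not go beyond them):",
        *[f"  - {fact}" for fact in facts],
        "Constraints:",
        *[f"  - {c}" for c in constraints],
        f"Task: {task}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    role="You are a senior Florida estate planning attorney.",
    facts=["The client is a solo practitioner with digitizing goals."],
    constraints=[
        "Do not infer facts not in evidence.",
        "Do not use flowery adjectives.",
    ],
    task="Summarize the implications of the Florida Digital Bill of Rights "
         "for a small law firm storing client data locally.",
)
print(prompt)
```

The point is not the code itself but the discipline it encodes: role first, immutable facts second, prohibitions third, and only then the task, exactly how a careful partner briefs a new associate.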
The irony of the "AI Revolution" is that it may not replace lawyers the way many consumers might jokingly(?) hope, but rather amplify those of us who are most obsessive about language. The sloppy drafter will get sloppy results from AI. The precise drafter will be able to wield these models to analyze thousands of documents, draft complex provisions, and organize vast datasets with unprecedented speed.
We have spent our careers training our brains to close loopholes and define terms. We didn't know it at the time, but we were training to speak the native language of Artificial Intelligence (or at least LLMs). And of course, always exercise extreme caution and follow all rules, regulations, and ethical obligations regarding the use of AI in practice.

