AI Is NOT Your Lawyer - Your Chats Are Not Protected

By Luis Fiallos

"Yeah, let me run it by my lawyer real quick"

Artificial Intelligence (“AI”) tools have become increasingly prevalent across many domains of human activity. In the United States, for instance, more than half of American households have adopted AI in some form (see Bond, Trends – Artificial Intelligence (2025), at 59 - accessed March 24, 2026). Bradley Heppner, a former executive of several corporate entities involved in a $150 million securities fraud scheme, took his adoption of AI too far: his conversations with an AI chatbot were exposed in federal court and used against him as evidence by federal prosecutors.

During a recent evidentiary hearing in a criminal matter, U.S. v. Heppner (1:25-cr-00503), the United States District Court for the Southern District of New York had to answer a question of first impression nationwide: whether a user’s communications with a publicly available AI platform, made in connection with a pending criminal investigation, are protected by two longstanding legal principles, the attorney-client privilege and the work-product doctrine. For the reasons that follow, the Southern District of New York said no and concluded that Heppner’s chats were (1) not private and (2) not protected.

After receiving a grand jury subpoena and confirming he was the target of an FBI investigation into a $150 million securities fraud scheme, Heppner used the AI platform “Claude” to prepare his legal defense strategy in anticipation of his indictment. Heppner did this on his own, without the advice of a licensed attorney, a fact that proved decisive in the outcome of this case.

Heppner was indicted and arrested on November 4, 2025. After his arrest, the FBI, while executing a search warrant at Heppner’s home, found several electronic devices containing AI-generated material prepared in anticipation of his indictment. Following this discovery, federal prosecutors moved for a ruling that the AI-generated documents were “fair game” as evidence at trial. Heppner naturally opposed, claiming that his conversations with the chatbot should be treated the same way as conversations with a lawyer; in other words, that his AI communications were privileged and confidential and could not be used against him at trial. The court disagreed.

The court started by addressing Heppner’s attorney-client privilege claim. An individual invoking the attorney-client privilege seeks a ruling that certain communications with that individual’s attorney are confidential and shielded from disclosure. The analysis follows a three-part test, i.e., a court protects a communication when three things are true: (1) the communication is between a client and a lawyer; (2) the communication is intended to be confidential; and (3) the purpose of the communication is to obtain legal advice. Heppner failed to satisfy at least two, and potentially all three, elements of this test.

First, the court determined that the communications were not between Heppner and an attorney because Claude AI is not an attorney. The court emphasized that, at most, the communication between Heppner and the AI was a discussion of legal issues between two non-attorneys. As such, there is no privilege to preserve. The court noted that, under some circumstances, a third party acting as an agent for attorneys may trigger the protections afforded by the attorney-client privilege. However, that applies only when an attorney instructs the client to communicate with that agent, which Heppner’s counsel never did. As mentioned earlier, Heppner communicated with the AI chatbot voluntarily and on his own initiative.

Second, the court concluded that the communications between Heppner and the AI lacked confidentiality. The court explained that the company running Claude AI has a privacy policy that explicitly states that the company collects both user inputs and Claude’s outputs, uses that data to train the AI system, and reserves the right to disclose it to third parties. As such, Heppner had no reasonable expectation of confidentiality or privacy in his communications with Claude.

Third, the court found that Heppner did not communicate with Claude AI for the purpose of obtaining legal advice, since Claude AI, like any other chatbot, appends a disclaimer to each communication that reads “I am not a lawyer and can’t provide formal legal advice or recommendations.” The court noted, however, that this issue presents a closer call because Heppner’s counsel argued that Heppner communicated with Claude for the express purpose of talking to counsel. The court reiterated, though, that the question is not whether Heppner intended to use those communications to talk to a licensed lawyer afterwards, because no licensed lawyer had instructed him to use an AI chatbot in the first place. Instead, the question is whether Heppner intended to obtain legal advice from the chatbot itself. The answer was obvious, the court said: “an AI chatbot is not a lawyer and cannot provide formal legal advice or recommendations.”

The court then proceeded to analyze Heppner’s claim that the documents were protected by the work-product doctrine, which protects materials prepared in anticipation of litigation. The idea behind this principle is that a lawyer’s mental impressions and the strategy they are planning for a particular case cannot be made available to the other side’s attorney, as that would completely defeat the purpose of good client advocacy.

The court also rejected Heppner’s claim for relief under the work-product doctrine, once again emphasizing that an AI chatbot is not a lawyer. Being a lawyer, or an authorized agent of a lawyer, is the key component of a work-product analysis. Therefore, since AI is not a lawyer, the responses generated by AI do not reflect an attorney’s legal thinking or strategy, and no protection is afforded.

The court’s message was straightforward: Traditional legal protections do not automatically apply when someone uses an AI tool to develop legal arguments or strategy.

For businesses and individuals using AI to address legal matters, especially in anticipation of litigation, this creates real risk. The holding in Heppner does not apply only to the criminal law arena. The corporate entities for which Heppner served as an executive are being subpoenaed to produce records and entries showing whether additional employees were deploying AI tools to create false records or facilitate schemes intended to defraud investors at large.

As such, if you think, or know for sure, that you need the services of an attorney, do not talk to AI. Contact a licensed attorney immediately.