A Federal Court Just Warned Employers:
AI Chats Can Become Evidence
The Heppner lesson: AI chat logs can become evidence in court
Employee details pasted into an AI chat can end up as evidence. In U.S. v. Heppner, a federal court in New York rejected a claim that a CEO’s AI chats about his criminal case were protected by attorney-client privilege (memorandum dated February 17, 2026).
The FBI seized the CEO’s computer, and prosecutors sought access to his AI chat history. That fact pattern is extreme, but the takeaway for employers is not. If a manager drops a complaint summary, a performance narrative, or a termination rationale into a public AI tool, the chat itself can become a business record.
AI creates two immediate employer risks: discoverable chat records and ungoverned decision workflows. Everything below ties back to one of those two risks.
Featured Takeaway
In U.S. v. Heppner, a federal court held that public AI chats about legal issues were not privileged on those facts. That means employer AI conversations can become discoverable business records.
Privilege belongs with your lawyer, not inside a chatbot thread
Attorney-client privilege generally protects confidential communications between a client and their attorney for legal advice. A public AI tool is not your attorney. Employers should not assume an AI chat about legal issues is privileged.
The Heppner court left open one narrow possibility: if an attorney directs AI research as part of the attorney’s own work product, there may be an argument for protection. That theory is narrow, fact-specific, and untested, and HR should not rely on it.
Compliance Note
“Incognito” also gets misunderstood. Temporary or incognito modes may change what shows up in chat history or whether content is used for training, but they do not guarantee that no record exists. Some providers retain temporary chats for roughly 30 days: OpenAI, for example, says Temporary Chats are deleted within 30 days unless legal or security reasons require longer retention (Data Controls FAQ).
Six checks to keep AI use from becoming a records problem
California hiring risk: the employer still owns the decision
California’s approach is blunt: existing discrimination laws still apply when employers use AI in hiring, and employers cannot shift liability to the vendor or software. California’s Civil Rights Council regulations on automated-decision systems were approved in June 2025 and became effective October 1, 2025. In practice, that means AI hiring tools now sit inside existing FEHA discrimination and recordkeeping obligations, including the records needed to explain how screening, ranking, and selection decisions were made.
An automated decision system covers more tools than most employers expect: anything that prioritizes, ranks, or filters candidates. That includes resume screening, assessments, interview-analysis tools, and “smart” ATS workflows.
Operator Insight
When we help teams clean this up, the fastest win is version control. Job descriptions, minimum qualifications, interview questions, scorecards, and rejection reasons need stable versions and named owners. If those inputs change every time someone prompts a tool, the hiring decision gets harder to defend later.
How VertiSource HR helps HR teams control AI use
Most employers do not need to ban AI. They need a clear process managers can follow and a way for HR to account for how AI was used in a given decision.
Deliverables: AI Use Matrix, approved-tools register, HR workflow map, filing rule, retention path, required human review step.
Policy Controls
AI workplace policy with defined boundaries
Without a written standard, every manager invents their own AI workflow. A short AI workplace policy draws the line between routine use and HR-only tasks, so employee-specific information stays in managed systems instead of chat threads.
Records and Retrieval
Approved-tools register and output filing path
An approved-tools register keeps usage visible. A filing path ensures that prompts, drafts, and final outputs land in a system HR can access during audits or disputes, not in a personal chat log.
Workflow Mapping
Workflow audit across recruiting and employee relations
A workflow audit shows each point where AI-generated content enters a staffing or employee relations decision, so your team knows where human review steps and filing rules belong.
Systems and Controls
System configuration and retention controls
Policy only works if systems support it. We tie filing rules to your existing HR platform (payroll, HRIS, time tracking) so final versions and decision records have a consistent home outside of chat history.
Request an AI workflow review
Request an AI workflow review so we can identify where AI-generated content enters HR decisions, set up filing and retention paths, and build an AI Use Matrix your managers can actually follow.
Frequently Asked Questions
AI prompts become records when employee-specific information goes in
If managers paste complaint summaries, performance narratives, or termination rationales into public AI tools, the chat itself can become a business record. VertiSource HR can map where AI touches recruiting and employee relations, define where prompts and final versions belong, and put a concrete set of controls in place: AI Use Matrix, approved-tools register, HR workflow map, filing rule, retention path, and a required human review step.
Ryan Joyce
Ryan writes about payroll operations, benefits compliance, HR technology, and the systems employers rely on when change puts pressure on the basics.
Disclaimer: This content is for general informational and educational purposes only and does not constitute legal, tax, accounting, or professional advice. Consult a qualified attorney or licensed advisor before making employment, payroll, or compliance decisions. VertiSource HR disclaims all liability for actions taken or not taken based on this material.

