
A Federal Court Just Warned Employers: AI Chats Can Become Evidence

Artificial Intelligence


March 11, 2026 | 5 min read | By Ryan Joyce • VertiSource HR

The Heppner lesson: AI chat logs can become evidence in court

Employee details pasted into an AI chat can end up as evidence. In U.S. v. Heppner, a federal court in New York rejected a claim that a CEO’s AI chats about his criminal case were protected by attorney-client privilege (memorandum dated February 17, 2026).

The FBI seized the CEO’s computer, and prosecutors sought access to his AI chat history. That fact pattern is extreme, but the takeaway for employers is not. If a manager drops a complaint summary, a performance narrative, or a termination rationale into a public AI tool, the chat itself can become a business record.

AI creates two immediate employer risks: discoverable chat records and ungoverned decision workflows. Everything below ties back to one of those two risks.

Featured Takeaway

In U.S. v. Heppner, a federal court held that public AI chats about legal issues were not privileged on those facts. That means employer AI conversations can become discoverable business records.

Privilege belongs with your lawyer, not inside a chatbot thread

Attorney-client privilege generally protects confidential communications between a client and their attorney for legal advice. A public AI tool is not your attorney. Employers should not assume an AI chat about legal issues is privileged.

The Heppner court left open one narrow possibility: if an attorney directs AI research as part of the attorney’s own work product, there may be an argument for protection. That theory is fact-specific and untested, and HR should not rely on it.

Compliance Note

“Incognito” also gets misunderstood. Temporary or incognito modes may change what shows up in chat history or whether content is used for training, but they do not guarantee that no record exists. OpenAI, for example, says Temporary Chats are deleted within 30 days unless legal or security reasons require longer retention (Data Controls FAQ).

Six checks to keep AI use from becoming a records problem


1. AI Use Matrix (one page): Look for allowed, allowed-with-conditions, and prohibited uses by workflow. Red flags: AI used to draft discipline or investigation notes without HR review.

2. Approved-tools register: Look for tool name, owner, approved use case, SSO or login path, retention setting, data location, HR owner, legal hold method, and whether output can influence a hiring or ER decision. Red flags: HR work done through personal emails.

3. Prompt redaction step: Look for a required remove-identifiers step before prompting (names, dates, pay rates, medical details). Red flags: full complaint narratives pasted into prompts.

4. Output filing rule: Look for one designated home for every final draft (HRIS case note, employee file, or ATS record). Red flags: the “final” only exists in chat history.

5. Human-in-the-loop hiring: Look for documented recruiter or manager decision points at each screening stage. Red flags: software ranking becomes the decision.

6. Retention and legal hold: Look for a defined process to preserve prompts and outputs when a complaint or charge arrives. Red flags: no process to collect AI outputs alongside other case documents.
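For teams with technical support, the redaction step in check 3 can be partially automated before text ever reaches a prompt. The sketch below is purely illustrative: the placeholder labels and regex patterns are our own assumptions, they cover only a few identifier types, and automated redaction never replaces a human review pass (names, in particular, still need manual removal).

```python
import re

# Illustrative patterns only -- real HR data needs broader, reviewed coverage.
PATTERNS = {
    "[EMAIL]": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "[SSN]":   r"\b\d{3}-\d{2}-\d{4}\b",
    "[DATE]":  r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    "[PAY]":   r"\$\d[\d,]*(?:\.\d{2})?",
}

def redact(text: str) -> str:
    """Replace common identifiers with placeholders before prompting."""
    for placeholder, pattern in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

note = "Jane Doe (jane.doe@acme.com) was hired 3/14/2022 at $85,000."
print(redact(note))
# Names like "Jane Doe" pass through -- regexes alone cannot catch them.
```

A script like this belongs inside the approved workflow, not as a substitute for it: the redacted text, like the final output, should still land in the designated filing location from check 4.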

California hiring risk: the employer still owns the decision

California’s approach is blunt: existing discrimination laws still apply when employers use AI in hiring, and employers cannot shift liability to the vendor or software. California’s Civil Rights Council regulations on automated-decision systems were approved in June 2025 and became effective October 1, 2025. In practice, that means AI hiring tools now sit inside existing FEHA discrimination and recordkeeping obligations, including the records needed to explain how screening, ranking, and selection decisions were made.

An automated decision system sweeps in more tools than most employers expect: anything that prioritizes, ranks, or filters candidates. That includes resume screening, assessments, interview-analysis tools, and “smart” ATS workflows.

Operator Insight

When we help teams clean this up, the fastest win is version control. Job descriptions, minimum qualifications, interview questions, scorecards, and rejection reasons need stable versions and named owners. If those inputs change every time someone prompts a tool, the hiring decision gets harder to defend later.

AI policy • HR records • Hiring controls

How VertiSource HR helps HR teams control AI use

Most employers do not need to ban AI. They need a clear process managers can follow and a way for HR to account for how AI was used in a given decision.

Deliverables: AI Use Matrix, approved-tools register, HR workflow map, filing rule, retention path, required human review step.

Policy Controls

AI workplace policy with defined boundaries

Without a written standard, every manager invents their own AI workflow. A short AI workplace policy draws the line between routine use and HR-only tasks, so employee-specific information stays in managed systems instead of chat threads.

Worth knowing: A filing rule matters as much as a usage rule when prompts and outputs become records.

Records and Retrieval

Approved-tools register and output filing path

An approved-tools list keeps usage visible. A filing path ensures that prompts, drafts, and final outputs land in a system HR can access during audits or disputes, not in a personal chat log.

Worth knowing: Personal accounts and personal devices make it harder to answer who typed what and where output went.

Workflow Mapping

Workflow audit across recruiting and employee relations

A workflow audit shows each point where AI-generated content enters a staffing or employee relations decision, so your team knows where human review steps and filing rules belong.

Worth knowing: The goal is a documented trail HR can audit after the fact.

Systems and Controls

System configuration and retention controls

Policy only works if systems support it. We tie filing rules to your existing HR platform (payroll, HRIS, time tracking) so final versions and decision records have a consistent home outside of chat history.

Worth knowing: Centralizing final versions reduces the risk that the “final” only exists in a chat thread.

Request an AI workflow review

AI POLICY • HIRING • EMPLOYEE RELATIONS

Request an AI workflow review so we can identify where AI-generated content enters HR decisions, set up filing and retention paths, and build an AI Use Matrix your managers can actually follow.

Explore our HR services

Frequently Asked Questions

Are AI chats about employee or legal issues protected by attorney-client privilege?
Not by default. In U.S. v. Heppner, a federal court held that public AI chats about legal issues were not privileged on those facts. Employers should treat AI chats about employee or legal issues as potentially discoverable unless counsel directs otherwise through an approved process.

Do California’s AI hiring rules already apply?
Yes. The Civil Rights Council regulations on automated-decision systems became effective October 1, 2025. Employers using AI tools that prioritize, rank, or filter candidates are already operating inside FEHA discrimination and recordkeeping obligations.

How should HR handle AI prompts and outputs as records?
Treat AI prompts and outputs like other HR documents. File the final version in the HRIS or employee file. Preserve the prompt if it contained employee-specific facts. Include AI-generated drafts in your legal hold process when a complaint or charge arrives.

Does ChatGPT count as an automated decision system?
It depends on the workflow. If ChatGPT or a similar tool ranks, filters, or scores candidates, it may qualify as an automated decision system under California’s regulations. That should be evaluated under existing FEHA obligations, including documentation, adverse-impact review, and defined human decision points.

AI prompts become records when employee-specific information goes in

If managers paste complaint summaries, performance narratives, or termination rationales into public AI tools, the chat itself can become a business record. VertiSource HR can map where AI touches recruiting and employee relations, define where prompts and final versions belong, and put a concrete set of controls in place: AI Use Matrix, approved-tools register, HR workflow map, filing rule, retention path, and a required human review step.


Ryan Joyce

Vice President of Client Partnerships, VertiSource HR

Ryan writes about payroll operations, benefits compliance, HR technology, and the systems employers rely on when change puts pressure on the basics.

Disclaimer: This content is for general informational and educational purposes only and does not constitute legal, tax, accounting, or professional advice. Consult a qualified attorney or licensed advisor before making employment, payroll, or compliance decisions. VertiSource HR disclaims all liability for actions taken or not taken based on this material.

Schedule a Call with Our Team