AI IN THE WORKPLACE: WHAT HR NEEDS TO KNOW ABOUT POLICY, PRACTICAL USES, AND LEGAL RISK
Not a Future Trend: AI Is Upon Us
Artificial Intelligence (AI) has quickly become part of the modern workplace, and HR professionals are now on the front lines of its impact. Employees are already using AI tools to draft emails, summarize documents, build presentations, and organize information. Businesses are exploring AI for recruiting, training, and workforce planning.
This rapid adoption means HR can no longer treat AI as a future trend. It is already here, and it must be managed as an operational and compliance issue.
For HR teams, AI offers meaningful opportunities to increase efficiency and improve service delivery. AI can assist with drafting job descriptions, creating interview guides, developing onboarding materials, and building training content. It can help summarize employee survey results, organize policy updates, and generate first drafts of communications.
Used thoughtfully, AI can reduce the administrative workload of routine tasks and allow HR professionals to spend more time on strategic work such as workforce planning, employee relations, leadership coaching, and culture initiatives.



The Legal and Compliance Risks of AI in the Workplace
However, the same tools that create efficiency can also create risk if used without a clear structure.
Risk #1: Protecting Confidential Information
One of the most significant concerns for HR is confidentiality. Human resources departments handle highly sensitive information, including medical data, accommodation details, investigation notes, compensation information, and employee relations documentation. Entering this type of information into public AI platforms can create serious privacy and data security issues.
HR professionals must be especially cautious: confidential employee information should never be entered into AI systems that the organization has not explicitly approved and secured.
Risk #2: Discrimination in Hiring & Employment Decisions
AI can also raise important questions around discrimination and employment decision-making. Some organizations are experimenting with AI tools to screen resumes, analyze video interviews, or evaluate employee performance data. While these tools can appear efficient and objective, they may rely on underlying data that reflects historical bias. HR professionals must ensure that AI is never the final decision-maker in hiring, promotion, discipline, or termination decisions. Human review, documentation, and consistent application of job-related criteria remain essential to reducing legal risk.
In 2023, the U.S. Equal Employment Opportunity Commission (EEOC) settled a case against iTutor Group after it was discovered that the company’s AI software was automatically rejecting older job applicants. The system screened out female applicants aged 55 or older and male applicants aged 60 or older, rejecting them before any human review. The case resulted in a $365,000 settlement and required changes to the company’s hiring practices.
This case sent a strong message: employers are legally responsible for discrimination caused by AI tools, even if a vendor built the system.
Risk #3: AI Hallucinations and Compliance
Accuracy is another area where HR must lead with caution. AI tools can generate policies, legal explanations, and best practice guidance that sound credible but are often incomplete, outdated, or entirely incorrect. If HR relies on AI-generated information without verification, the organization could adopt noncompliant policies or communicate inaccurate information to employees.
AI should be treated as a drafting assistant rather than a source of authority, and all AI-generated HR content should be carefully reviewed before implementation or distribution.
In 2025, a federal district court sanctioned three lawyers from a national law firm for citing cases in a legal argument that were entirely fabricated by AI. The AI had confidently produced citations that appeared legitimate, complete with case names, quotes, and legal reasoning. The problem was that the cases did not exist. When the court asked for copies of the cited decisions, the attorneys could not produce them because the cases were fictional.
Accuracy is not the only hidden compliance trap; wage and hour rules also apply. If non-exempt HR staff or other employees use AI tools outside of scheduled hours to complete work more quickly, that time may still be compensable. HR should reinforce clear expectations around off-the-clock work and ensure that the use of productivity tools does not unintentionally create wage and hour violations.



How Can HR Minimize AI Risk?
Given these legal risks and ethical considerations, HR should take the lead in developing and implementing a clear AI acceptable use policy. This policy should outline appropriate workplace uses of AI, such as drafting, summarizing, and brainstorming with non-confidential information.
It should clearly prohibit entering sensitive employee, medical, or proprietary business data into unapproved AI systems. The policy should also reinforce that AI cannot be used to make final employment decisions and that human oversight is required whenever AI supports HR processes.
Beyond policy development, HR has an important educational role. HR professionals should partner with IT and legal teams to provide guidance to managers and employees on responsible AI use. Training should cover data privacy, bias awareness, documentation standards, and the importance of human review.
Creating a culture of transparency, where employees feel comfortable asking questions about AI tools, is far more effective than allowing unmonitored use to grow in the background.
Building a Compliant AI Strategy in HR
AI is not replacing HR, but it is changing how HR work gets done.
Organizations that empower HR to guide AI adoption thoughtfully will be better positioned to gain efficiency while maintaining compliance and employee trust. By setting expectations, providing training, and building guardrails now, HR professionals can ensure that AI becomes a valuable support tool rather than a source of legal and ethical risk.
By: Brian Lahargoue, Esq.