Recently, on the NAE hotline, there has been an uptick in calls from our members regarding managers, supervisors, and employees using artificial intelligence (AI) to draft reports and other work.

A 2023 Checkr survey of 3,000 employees found that 85% reported having used AI tools to perform tasks at work.

With all the advancements in AI, it is more tempting than ever to use it, as it promises to make life easier by simplifying work tasks. While AI offers real benefits, there are also real risks in letting it do an employee's work.

The Risk of AI Hallucinations

One of the biggest risks in using AI to draft reports and other work is that AI can simply be wrong. This issue even has a name: AI hallucinations. AI hallucinations occur when a generative AI model, like ChatGPT or Gemini, produces incorrect or misleading information as if it were factual, often due to insufficient training data or limitations in its algorithms. These errors arise when the AI attempts to create content beyond its understanding, producing outputs that may appear plausible but are fundamentally flawed. A recent New York Times article reported that AI hallucinations appear to be becoming more common, not less, noting that in one test, the hallucination rates of new AI systems were as high as 79%.

AI Citing Cases That Don’t Exist

There have been multiple news reports recently about attorneys falling victim to AI hallucinations. In these reports, attorneys used AI to draft important legal documents, such as motions and legal briefs, which were then submitted to the court. When the court reviewed the documents, it found that the AI had cited multiple fake cases to support the arguments they contained. The attorneys were unaware of the fake citations because they had failed to properly review the documents before filing them, and they learned of the problem only when confronted by the court. Beyond the embarrassment of being called out by the court for using fake case citations, some attorneys also had sanctions imposed for their failure to ensure that their filings were accurate.

The legal field is not the only industry affected by AI hallucinations. A hallucination could cause an AI-powered chatbot to give a customer incorrect instructions, or produce an AI-generated financial report with inaccurate figures that misleads executives about the health of the business.

Guarding Against AI Hallucinations

Any employer who allows employees to use AI to assist with their work should be concerned about AI hallucinations. There are some steps that employers can take to protect against these embarrassing situations.

Checking For Accuracy

As the legal-field example shows, the court took issue with the attorneys' failure to review the documents and confirm that the citations in them were correct. One of the main ways to avoid AI hallucination issues in the workplace is for employees to take the time to review any AI-created documentation and validate the information it contains. Employers should require all employees who use AI to assist with work to review and validate all information, especially any citations, and employees who fail to do so should face disciplinary consequences.

Don’t Trust AI Overconfidence

Employees should also not be swayed by AI's overconfidence. AI typically presents information in an authoritative tone whether it is right or wrong. Employees should trust their own judgment more than the confidence of AI.

Understanding the Limitations of AI Models

Employers should also be very familiar with the AI programs that are being used in the workplace, especially their limitations. Not every AI program is good at everything. Some can be lightning fast with numbers or computations, but may not be able to grasp the nuances of complex human emotions or cultural context. Knowing an AI program’s limitations will help employers avoid relying on AI for tasks that are outside of its limits.

Be Mindful of When AI is Used

Employers should also keep in mind that they can set parameters around when employees are allowed to use AI to assist with drafting documents and other work. Employers may want to consider prohibiting AI use when a document responds to a state or other governmental entity, when confidential or proprietary information is involved, or in other important situations where AI hallucinations could be especially detrimental or create additional liability for the employer.

Conclusion

While AI promises efficiency in the workplace, it also comes with risks, one of the biggest being hallucinations that produce incorrect or outright false information. Employers who allow employees to use AI to draft documents and other work need strong policies and procedures in place to protect the company and to ensure that all AI-assisted work product contains only accurate, verifiable information.

By: Cara Sheehan, Esq.