As artificial intelligence and tools such as ChatGPT become increasingly prevalent, a line from the movie 2001: A Space Odyssey continues to resonate: “I’m sorry, Dave. I’m afraid I can’t do that.” If you’ve seen the movie, you’ll know that line is spoken by HAL 9000, the computer that runs the spaceship Dave is on. HAL 9000 has become sentient and wreaks havoc on the hapless crew. That’s the really short version. In the realm of human resources and compliance, our concerns are not so much about a world takeover by AI, but rather the many ways it can expose companies to previously unforeseen liabilities.

First, let’s define the acronyms. 

AI = Artificial Intelligence. 

ChatGPT = Chat Generative Pre-trained Transformer. 

NLP = Natural Language Processing.

If your employees are already using or experimenting with these tools at work, it’s imperative to evaluate how the technology is being used and what risks it poses. Has its use inadvertently exposed the company to harmful cyberthreats? Have employees shared confidential or proprietary information? Are employees presenting AI-generated work as their own? Are your clients receiving work that was auto-generated in 15 minutes but billed as two hours of time?

Workplace discrimination through the use of AI is a real concern. We have already seen it occur in recruiting programs, where automated resume screening discards candidates based on a template and disproportionately impacts candidates in a protected class. Whether intentional or not, the outcome is the same: discrimination, which can expose your organization to complaints (or lawsuits) over unfair hiring practices.

Earlier this year, we tested ChatGPT with some compliance questions. The answers were incorrect, lacked supporting data, or required human eyes and legal knowledge to interpret. Consider the potential consequences if employees applied the information from these programs directly, without oversight. AI-generated information should be treated with the same level of skepticism as any other information gathered from the internet.

Further, imagine the potential for accusations of plagiarism. While your company may decide to allow employees to use this software to spark ideas, an ethical quandary arises: should you disclose the AI’s involvement in the final product? Striking a balance between fostering innovation and maintaining transparency can be difficult.

When employees input confidential information into AI programs, the software may retain it. That stored data can later be used by the program to generate answers or data for other users. In other words, employees using AI programs may be disseminating confidential or proprietary information without realizing it. Is your company prepared to face the consequences of that information being leaked into the world?

Despite the challenges, the applications of AI technology are vast. The key is embracing the potential benefits while managing the risks of this emerging technology. At this moment, there are many questions and few answers. To navigate the AI landscape effectively, organizations must craft comprehensive policies and processes. We have created a sample AI/NLP Policy, available for download from our Member Portal, to guide organizations in leveraging this technology responsibly and proactively. By doing so, companies can minimize risks and ensure a productive, ethical, and secure AI-driven work environment.