In November 2022, OpenAI—an artificial intelligence (“AI”) research laboratory backed by Microsoft—released its AI “chatbot” for public use: ChatGPT. In the months since, ChatGPT’s popularity has grown exponentially. Chatbots have become the topic of conversation everywhere, from the watercooler to social media to “60 Minutes.” As employees explore the seemingly endless applications of ChatGPT, both positive and negative, employers should consider the capabilities and risks of AI tools in the workplace.
What is ChatGPT?
At its core, ChatGPT is a language processing tool that is “trained” on information from the internet. Although based on AI, ChatGPT has notable limitations: it relies on human feedback to improve its outputs and provide accurate answers to users. On March 14, 2023, Microsoft Corporate Vice President and Consumer Chief Marketing Officer Yusuf Mehdi announced that Bing—the company’s widely used internet search engine—is now “running on GPT-4,” the fourth generation of the OpenAI language model that powers ChatGPT.
While AI technology has been advancing for years, free, user-friendly chatbots available to the public are novel. ChatGPT and similar tools can mimic human conversations, generate a vacation itinerary, draft a model workplace policy in seconds, instantly generate a string of computer code, or even draft email responses to coworkers and customers. According to a recent survey by Fishbowl of nearly 12,000 professionals, 43% of respondents admitted they use ChatGPT at work. Of those individuals, nearly 70% said their managers do not know about their work-related chatbot use.
On the other hand, many employers are actively seeking applicants with ChatGPT experience and knowledge of these emerging technologies. In an April survey of approximately 1,200 business leaders by ResumeBuilder.com, over 90% of the leaders who are currently hiring said they are looking for individuals with ChatGPT experience. While some employers are searching for so-called “prompt engineers” to help their organizations understand ChatGPT, other companies are taking a “wait and see” approach or prohibiting chatbot use altogether. Regardless of the approach, employers should be aware of several important legal risks and considerations.
Legal Considerations
- Confidentiality: In its short life to date, ChatGPT has already led users to inadvertently disclose businesses’ confidential information, trade secrets, and other proprietary data. On April 25, 2023, OpenAI announced plans to roll out a “ChatGPT Business” subscription, which would ensure that “end users’ data won’t be used to train [ChatGPT] models by default.” Unfortunately, it remains unclear to what extent ChatGPT, chatbots in general, and other AI tools retain and rely upon confidential information entered by users. For example, if an employee enters a string of proprietary code into ChatGPT and asks the chatbot to produce something similar, the chatbot could theoretically reproduce that proprietary code in a later answer to a competitor.
- Inaccurate Information: Because ChatGPT and other chatbots continue to rely on human feedback and information found on the internet, there is an inherent risk that users will receive inaccurate, incomplete, or biased information. If an employee shares inaccurate information with a client, customer, or colleague, that misinformation could be attributed directly to the employer.
- Discrimination in Employment Decisions: In January 2023, the Equal Employment Opportunity Commission (“EEOC”) held a public hearing on the use of AI in employment decisions. In its press release, the EEOC explained that “employers are using automated systems to make employment decisions, including the recruitment, hiring, monitoring, and firing of workers.” Although AI tools are often promoted as removing unconscious human bias from decision-making, they may inadvertently discriminate against employees or applicants based on seemingly objective factors that are actually explained by protected characteristics. For example, a work history gap on a resume may be due to a disability or veteran status rather than voluntary unemployment. On April 25, 2023, EEOC Chair Charlotte Burrows, Federal Trade Commission Chair Lina Khan, Consumer Financial Protection Bureau Director Rohit Chopra, and U.S. Department of Justice Assistant Attorney General Kristen Clarke spoke on a press call about AI discrimination, the impact of AI on existing federal laws, and new enforcement measures. In their joint statement, the agencies emphasized that there will be “an increased focus and coordination amongst all of [them] to really tackle” the challenges associated with AI in the workplace. Similarly, numerous state legislatures have enacted or introduced laws regulating the use of AI in employment decisions.
Next Steps for Employers
As with any new tool or technology, there is no “one-size-fits-all” approach to drafting an employment policy that addresses employees’ use of ChatGPT or other AI tools. Likewise, employers must weigh company-specific circumstances in deciding whether to rely on AI tools in their employment decisions or day-to-day business activities.
To address new and evolving AI technologies, employers should review their existing technology use policies, codes of conduct, and any confidentiality and trade secret agreements, and consider updating them to cover emerging AI tools such as ChatGPT. These policies should make clear that employees are prohibited from inputting confidential or proprietary information into chatbots such as ChatGPT. Employers should also consider setting reasonable boundaries for any acceptable ChatGPT use, including requiring employees who use ChatGPT for work-related projects to independently verify all information and outputs. Finally, employers who allow the use of ChatGPT in the workplace should train employees on its proper use, potential benefits, and realistic risks.