
Artificial Intelligence (AI) in HR: The Ethics of AI in a Human-Driven Industry

The use of artificial intelligence (AI) is making waves in nearly every industry, some more than others. Many are worried about AI’s fast development and adoption, but integrating AI technologies into company processes and procedures is still primarily in the early exploratory phases, especially when it comes to human resources (HR). In fact, according to a Gartner, Inc. survey of HR leaders, only 5% have implemented generative AI, with 9% conducting pilots. There is still a long way to go for the full integration of AI into businesses. However, HR leaders should begin getting familiar with the possible applications of generative AI and other AI software for their organizations to prepare for the near future and beyond.

In this article, we will cover how AI will impact HR, the ethical concerns of AI, and what companies should do to prepare as these technologies continue to grow.

How Will AI Impact HR?

Generative AI is a valuable tool for HR professionals and job seekers alike, with various use cases. Due to the nature of HR, generative AI will never be able to replace human workers fully; HR is human-focused and, at its core, driven by human connection. However, AI will undoubtedly change multiple processes within HR as this technology continues to grow.

One of the primary ways that generative AI is already being used liberally is as a writing tool to make repetitive or tedious communications much more efficient. This includes drafting email communications, text messages, and job descriptions. Tasks like these may have required much more time before tools like ChatGPT existed, especially when preparing new content. Of course, recruiters and other HR professionals will still need to modify the content that generative AI produces. Nevertheless, with the extra time gained from using AI tools, they can focus more on forming closer connections with candidates and clients, ultimately providing more opportunities for business growth.

Another relevant application of AI is Talent Intelligence Software, defined by G2 as software that “employs AI frameworks or features to assist with talent acquisition or management processes.” This software can help with various procedures, such as “recruiting, candidate engagement, interview automation or transcription, upskilling, reskilling, skills management, and other talent acquisition or management tasks.” It relies on machine learning (ML) to continually learn from the specific requirements companies provide and deliver optimized candidate matches. Better matches, in turn, benefit the client, the candidate, and the recruiter alike.

In addition to Talent Intelligence Software, Recruiting Automation Software is also becoming a popular application of AI technology. As defined by G2, “These tools assist HR personnel, hiring managers, and recruiters in creating qualified applicant pools for current and future openings. Recruiting automation solutions employ artificial intelligence to identify qualified candidates, verify email addresses and social profiles, and export full candidate profiles to the appropriate file or software application.” This technology can be fully integrated with applicant tracking systems (ATS), making it a tremendous help in streamlining the hiring process and monitoring candidate pipelines. Both Talent Intelligence and Recruiting Automation Software have seen exponential growth since 2012, with growth rates of 188% and 526%, respectively.

Text analysis is another area of AI that will play a prominent role for HR professionals and recruiters, as it can significantly cut down the time spent on many day-to-day tasks. AI can synthesize large bodies of data and text much faster than humans, which can help with reviewing resumes and cover letters, matching candidates, or recognizing themes in performance management and employee engagement. In addition to saving time on these tasks, AI software can arguably help mitigate the unconscious biases of recruiters. However, this perk also presents ethical concerns, which we will address in the next section.

What Are the Ethical Concerns of AI in HR?

Replacing Humans

One of the most significant ethical concerns with AI is the loss of the human element and, consequently, the loss of human jobs. Although this technology can save time by automating many repetitive and time-consuming tasks, it also potentially reduces the demand for human workers. Because HR is so human-focused, AI can never fully replace the meaningful connections that recruiters and HR professionals provide; however, it can automate much of the work these individuals complete.

Intellectual Property

Intellectual property is another prevalent ethical concern, and not just for the HR world. There is ambiguity around who owns the content that AI creates; therefore, issues arise around copyrighting content that businesses publish using AI. According to Bradford Newman, an attorney with Baker McKenzie in Palo Alto, Calif., there are a few areas where copyright infringement becomes a concern, including the following:

  • Potential for copyright infringement claims from third parties (e.g., copyright holders of images used by generative AI).
  • AI output, even from company-owned generative AI tools, falls into a grey area for copyright ownership, creating a few possibilities:
    • The company still owns the output
    • The company and toolmaker both own the output
    • The toolmaker owns the output

Given these possibilities, it is critical for leadership and employees to be very cautious when entering sensitive information into any generative AI tool, since the toolmaker collects and, “in many cases,” has the right to use any information that users enter.

Proprietary Information

Because AI toolmakers have access to the information users enter, another ethical concern arises: the access AI can potentially gain to a company’s proprietary information. Although AI can be helpful both for reasonable accommodations (such as closed captioning) and for creating content for performance reviews, it poses a threat to company and data privacy if sensitive or proprietary information is keyed into these platforms.

Candidate Screening

Earlier, we discussed the benefits of AI tools for candidate screening. However, if these tools are “trained” by someone with biases, this defeats the purpose of the technology. Additionally, because AI cannot exercise judgment the way a human can, it cannot determine on its own whether the decisions it makes are discriminatory. This is especially concerning when it comes to members of protected classes, so employers remain responsible for monitoring the screening and selection process.

What Should Companies Do to Prepare?

There are a variety of things companies can do to prepare for the emergence of AI. Educating yourself on best practices, and connecting with AI, cybersecurity, and other HR industry professionals, will help you put the following tips into practice.

Explore and Make a Plan

Making a plan for how your company will use AI is essential to charting a clear and ethically responsible path forward. While the concerns around AI can be scary, knowledge is the best weapon to combat fear and ensure that your business is prepared. Simply exploring ChatGPT, reading the latest news on AI, or observing other companies’ plans and policies can be a great place to start. Once you have a better understanding of the breadth of AI tools and what you would like to implement for your business, you can begin creating plans for monitoring AI.

Monitoring AI Technologies

Human oversight of AI, no matter how your business decides to use it, is key to ensuring that AI and machine learning (ML) models are working as intended. This includes creating explainable and transparent models, conducting operational monitoring, and overseeing existing models for bias. Ultimately, humans should be the ones making the final decisions for any outputs that AI creates.

Companies such as ADP have created councils dedicated to AI data and ethics, made up of professionals who can provide guidance and insight to ensure that, as AI continues to gain traction, the company uses these innovative tools in an “ethical, secure, and compliant” way.

Communication, Accountability, and Training

As you begin implementing AI in your company, clear communication and training about what these decisions mean for employees are critical to ensuring that company data remains secure. Decision-makers should communicate which tasks are appropriate for AI, such as drafting job descriptions, writing content, and analyzing text. Employees should be made aware of the risks that may arise if guidelines are not followed.

If the changes you decide to implement may result in layoffs in your organization, honesty is the best approach. Although these decisions are never easy, maintaining humanity and decency during this time is critical.

In addition to internal transparency about rules, regulations, and potential job losses, external customers should also be made aware of when the content they are reading or interacting with is AI-generated. This is generally best practice, as a lack of transparency in this area could be frowned upon and have unintended negative consequences for your business.

Hopefully, after reading this article, you feel more prepared and informed about navigating the ethical concerns of AI usage in HR. It is important to keep up to date and begin exploring AI to stay ahead of the curve and keep up with competition in the industry.

Dahl Consulting offers the latest news and insights when it comes to employment-related industry trends so that your company can thrive in today’s ever-changing world. Learn more or reach out to get connected with our employment experts today!
