Seattle-based user experience consultant Torrey Podmajersky received a cold e-mail in January from a recruiter called Jennie Johnson. The message said Johnson had created a career profile for Podmajersky and suggested a few matching openings, but the profile was too broad and left out several important pieces of her professional history. Podmajersky, who hadn’t been job hunting in the first place, figured it was a scam.
It wasn’t until she scrolled to the bottom that she discovered Johnson wasn’t even a human. The e-mail had come from an artificial intelligence bot; the company behind it described it as an “AI representation of an elite career coach.” “[I felt] a sick feeling in the pit of my stomach,” Podmajersky recalls, “because I recognized that it’s designed to make someone feel seen, special, cared for, advocated for—and all of that is a lie.”
Douglas Hamilton, a retired sales and marketing executive in Albany, N.Y., also got a Jennie Johnson e-mail. “Wrong roles, wrong industry, wrong level,” Hamilton says of the bot’s job digest. He had opened the e-mail because the recruiter’s headshot, an unusual inclusion in hiring correspondence, caught his eye. That photograph, too, was artificially generated.
The Jennie Johnson bot, part of a tool developed by the generative AI company Hyperleap, has sent unsolicited e-mails to many professionals in the past few months. It’s just one example of how recruiting platforms, from small start-ups to LinkedIn, are increasingly using AI to find, filter and recruit new hires. As these tools become more sophisticated, experts worry they may make navigating the employment market even more dehumanizing for job seekers—and could reinforce existing biases.
Hyperleap, based in Park City, Utah, created Jennie Johnson as an online job search tool that sends automated job digests to users who sign up for the service. The company’s chief operating officer, Kevin Holbert, says this AI system scours online job boards and businesses’ websites to build a searchable database. It then combs through a person’s LinkedIn profile, résumé, and work history and matches this information to its database to recommend opportunities.
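What Holbert describes is, in essence, a retrieval-and-ranking pipeline. As a rough illustration only—Hyperleap has not published its actual method, and everything below (the similarity measure, the sample postings and the profile text) is an assumption invented for the example—here is a minimal Python sketch that ranks job postings against a candidate profile using TF-IDF cosine similarity:

```python
# Hypothetical sketch of a resume-to-job matching step like the one
# Holbert describes: index job postings, then rank them against a
# candidate's profile text. The TF-IDF approach and all data here are
# assumptions for illustration; Hyperleap has not disclosed its method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_postings = [
    "Senior UX writer to craft product copy and content guidelines",
    "Regional sales director for enterprise software accounts",
    "Backend engineer, Python, distributed systems",
]

candidate_profile = (
    "User experience consultant specializing in UX writing, "
    "content strategy and product copy"
)

# Vectorize the postings and the candidate into the same TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
job_matrix = vectorizer.fit_transform(job_postings)
candidate_vec = vectorizer.transform([candidate_profile])

# Rank postings by cosine similarity to the candidate's profile.
scores = cosine_similarity(candidate_vec, job_matrix).ravel()
for score, posting in sorted(zip(scores, job_postings), reverse=True):
    print(f"{score:.2f}  {posting}")
```

In a sketch like this, a mismatched database or a thin profile produces exactly the failure Hamilton describes: the top-ranked results are simply the least-bad matches, whether or not they fit the candidate's role, industry or level.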
Although anyone can sign up for the service, people such as Podmajersky and Hamilton received their unsolicited e-mails during Hyperleap’s early campaigns to acquire new users in partnership with a jobs platform, according to Holbert. Anyone who registered on that platform received alerts from the Jennie Johnson bot as well, he says—though neither Podmajersky nor Hamilton recalls signing up for any jobs platform around that time. The company has since “stopped this partnership,” Holbert says. “We’re continuing to clean out our list [and] removing any users who are not engaging.”
Services like Jennie Johnson are a natural expansion of automated technology that recruiters have been using for more than a decade. This includes applicant tracking software that screens résumés for keywords, says Regan Gross of the Society for Human Resource Management (SHRM), who advises human resources professionals on recruitment practices. More than 97 percent of Fortune 500 companies already rely on such tools to screen applicants. And a 2022 survey by SHRM found that 42 percent of surveyed organizations with 5,000 or more employees reported using automation or AI in HR-related activities, including recruitment and hiring.
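The keyword screening Gross refers to is mechanically simple. The sketch below shows one plausible form it might take; the keyword list and the scoring rule are invented for illustration, not drawn from any particular vendor's product:

```python
# Minimal sketch of the keyword screening that applicant tracking
# systems have long performed, as Gross describes. The required
# keywords and the pass threshold are invented assumptions.
import re

REQUIRED_KEYWORDS = {"python", "sql", "agile", "stakeholder"}

def keyword_score(resume_text: str) -> float:
    """Return the fraction of required keywords found in a resume."""
    tokens = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(REQUIRED_KEYWORDS & tokens) / len(REQUIRED_KEYWORDS)

resume = "Led agile teams; built Python and SQL reporting pipelines."
print(keyword_score(resume))  # 0.75 -- 'stakeholder' is missing
```

A screen this literal rewards candidates who mirror a posting's exact vocabulary, which is one reason applicants are routinely advised to echo a job ad's phrasing in their résumés.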
Gross sees potential benefit in some of these tools but says they have their limits. For example, while AI applications can help with basic tasks such as summarizing an applicant’s skills, they cannot yet capture personality and unique traits that are important to recruiters. Gross predicts that widespread AI use will force job seekers “to find [new] ways to set themselves apart.”
AI applications are steadily automating more and more of the hiring process. For example, like Jennie Johnson, larger platforms such as LinkedIn can now automatically match candidates’ work experience and skills to available jobs. Some, including ZipRecruiter, also offer anthropomorphized chatbots that can respond to queries about a listed job, reach out to potential candidates and conduct personality assessments. New generative AI even lets employers outsource entire portions of the recruitment process to algorithms, including searching for candidates and conducting preprogrammed interviews.
Holbert says Hyperleap built a human persona for its AI to make it “more relatable” so that people would “act more natural.” Zahira Jaser, who researches organizational behavior at the University of Sussex Business School in England, says that a lack of clarity in an automated hiring process can have the opposite effect, however.
Jaser contends that uncertainty about whether one is dealing with a human affects candidates’ performance during the hiring process—and can also prevent recruiters from getting an accurate read on candidates’ normal behavior. In her research on AI-based interviews, Jaser has found that applicants often behave less naturally if they think they’re talking to an online bot instead of a human; they tend to keep a fixed gaze on the screen, for example, and avoid moving their hands.
Jaser’s research also indicates that AI-based interviews often penalize candidates who are first-generation graduates, speak with an accent or come from a less privileged background. Without enough human oversight, these AI-based recruitment systems can reinforce existing biases, Jaser adds. “Adopting these kinds of systems might be great from a cost-control perspective but, if unchecked, might create issues with diversity and inclusion,” she says.
Developers of AI recruitment tools have often said their algorithms have the potential to be more objective than humans—if these tools are trained on data that are inclusive and diverse enough. But such claims are misleading and show a lack of understanding about how bias operates in the real world, says Eleanor Drage, an AI ethics researcher at the University of Cambridge.
Because AI-based recruitment systems are trained on past hiring trends and practices, which have often been found to be biased and discriminatory, they will invariably repeat those same patterns by constructing associations between words—even if companies explicitly exclude factors such as race and gender, Drage says.
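A toy example makes the mechanism Drage describes concrete. In the sketch below—all data invented—gender is never an input, yet a naive model trained on a biased hiring record still learns to penalize a word that merely correlates with it:

```python
# Toy illustration of the proxy problem Drage describes: gender is
# never a feature, but a historically biased hiring record teaches a
# naive model to penalize words correlated with it. All data invented.
from collections import Counter

# Fictional past decisions (text, hired). "womens" appears mostly
# among past rejections, reflecting the bias in the record itself.
history = [
    ("captain womens chess club", 0),
    ("womens soccer team captain", 0),
    ("chess club captain", 1),
    ("soccer team captain", 1),
    ("debate team president", 1),
    ("womens debate team president", 0),
]

hired = Counter()
seen = Counter()
for text, label in history:
    for word in set(text.split()):
        seen[word] += 1
        hired[word] += label

# A naive per-word "hire rate" learned from the record.
for word in ("captain", "chess", "womens"):
    print(word, hired[word] / seen[word])
# captain 0.5, chess 0.5, womens 0.0 -- the proxy inherits the bias
```

Dropping the protected attribute from the inputs does nothing here, because the bias lives in the labels: any word distributed unevenly across past outcomes becomes a proxy for the attribute that was supposedly excluded.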
Despite the current hype surrounding these AI recruitment tools, many HR departments are waiting for them to prove their efficacy before adopting them, Jaser says. Hopefully, she adds, they won’t be adopted en masse unless and until they are shown to be useful, fair and reliable.