Computers have been after our jobs since 1950, when mathematician Alan Turing published "Computing Machinery and Intelligence," paving the way for what is now known as artificial intelligence (AI).
Panic over a human-versus-machine apocalypse reached an all-time high in 2017, with articles like The New Yorker's "Welcoming Our New Robot Overlords" recounting stories of mass layoffs and factory closings.
Fast-forward a few years, however, and humans not only understand AI but depend on it for almost everything:
- Voice assistants: Every time you interact with Siri, Cortana, Alexa, or a chatbot, you’re engaging with AI.
- Translation: Language, objects, pictures, and sounds are all translated into data used in algorithms that guide your online experiences.
- Predictive systems: The military, investment firms, hospitals, meteorologists, and any other industry that values predictions uses AI to draw conclusions from statistical data.
- Marketing: Analyzing buyer behavior and targeting specific markets are both done with AI, and there is now even crossover with voice-assistant technology.
So, while there’s little doubt AI makes our lives easier, it also raises the question: If AI gathers and reacts to data like a human, does that mean it’s fallible like a human? When it comes to employment screening, the answer is yes.
AI in Employment Screening
Using AI to conduct background checks and other forms of employment screening removes all human analysis. Instead, the software collects data from thousands of sources, then uses algorithms to compile bits of scattered information into supposedly meaningful insights. Within five minutes, the AI identifies, evaluates, and rates risk before putting a value on a prospective employee’s worth.
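To make the failure mode concrete, here is a minimal sketch of what such an automated pipeline can look like. Everything in it is hypothetical: the names, the records, the offense categories, and the risk weights are invented for illustration, not taken from any real screening product. The key flaw it demonstrates is matching records on name alone.

```python
# Hypothetical risk weights per offense category (illustrative only)
RISK_WEIGHTS = {"traffic": 1, "drug_possession": 5, "assault": 8, "dui_fatality": 10}

def screen_candidate(candidate, records):
    """Collect every record whose name matches the candidate, then sum
    risk weights. Note there is no check that the matched records
    actually belong to the same person as the candidate."""
    matched = [r for r in records if r["name"] == candidate["name"]]
    score = sum(RISK_WEIGHTS.get(r["offense"], 0) for r in matched)
    return {"candidate": candidate["name"], "matches": matched, "risk_score": score}

# Two different John Smiths -- a name-only pipeline conflates them.
records = [
    {"name": "John Smith", "birthdate": "1980-02-14", "offense": "traffic"},
    {"name": "John Smith", "birthdate": "1995-07-01", "offense": "dui_fatality"},
]
report = screen_candidate({"name": "John Smith", "birthdate": "1980-02-14"}, records)
print(report["risk_score"])  # 11: the other John Smith's fatality inflates the score
```

A real product would use far more data sources and a more elaborate model, but the structural problem sketched here is the same one behind the lawsuits described below: the scoring step happily aggregates records that belong to someone else.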
When AI Gets it Wrong
Imagine applying for a job delivering food, only to be rejected because your background check revealed you were convicted of causing a fatal crash while driving drunk – except you never were!
It happened to a man whose criminal background check was conducted using AI with no human oversight. He sued the company that ran the search, and he’s not alone. From low-level offenses like traffic infractions to serious felonies involving drug possession and assault, screening companies that rely solely on AI are finding their own records marred by court cases and judgments: damages paid to people who were wrongly labeled as criminals on their background checks and who claim they lost employment opportunities because of it.
The Human Touch
When AI conducts a background check, it uses web-scraping technology to gather data from websites based on criteria like name or birthdate. It then compiles the information and populates a report that is sent to an employer, who trusts the information and uses it as the basis for hiring decisions.
It takes a human being to ensure that the compiled information truly matches that of the prospective employee. Additionally, when multiple checks are run on the same criteria, AI often returns reports with different information. Only a real person can tell whether any or all of that information belongs to the same individual, and whether that individual is the one applying for the job.
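The disambiguation step described above can be sketched in a few lines. This is a simplified illustration, assuming records carry only name and birthdate fields; the data is hypothetical, and a real reviewer would compare far more identifiers (addresses, middle names, Social Security numbers) than this toy check does.

```python
def needs_human_review(candidate, records):
    """Flag records whose name matches the candidate but whose other
    identifying details (here, just birthdate) conflict -- exactly the
    ambiguous cases that name-only automated matching gets wrong."""
    conflicting = [
        r for r in records
        if r["name"] == candidate["name"] and r["birthdate"] != candidate["birthdate"]
    ]
    return len(conflicting) > 0, conflicting

candidate = {"name": "John Smith", "birthdate": "1980-02-14"}
records = [
    {"name": "John Smith", "birthdate": "1980-02-14", "offense": "traffic"},
    {"name": "John Smith", "birthdate": "1995-07-01", "offense": "assault"},
]
flag, conflicts = needs_human_review(candidate, records)
print(flag)            # True: one record belongs to a different John Smith
print(len(conflicts))  # 1
```

The point is not that this check is hard to automate; it is that someone has to decide what to do when it fires, and that judgment call, weighing incomplete and contradictory records against a real person's livelihood, is what the AI-only pipelines in the lawsuits above skipped.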
Yes, artificial intelligence is amazing technology with more potential than the human mind can currently fathom. Whether it’s predicting a hurricane or populating your Facebook feed with the latest political news (tailored to your leanings), AI can make us safer and life more pleasurable and convenient. What AI can’t do, however, is tell the difference between John Smith the philanthropist and John Smith the pedophile. That takes a company that understands there’s a person behind the data and refuses to put profit ahead of accuracy at the expense of someone’s livelihood.