Weapons of Math Destruction in Hiring

About a month ago I read Dr. Cathy O’Neil’s book, Weapons of Math Destruction. It is a call to arms for data scientists and anyone working in automation to handle people’s data responsibly. She lays out a framework for a code of conduct for people designing systems that handle personal data, and shows how to avoid, or at least become aware of, the harm those systems can cause society. Below is a draft of a speech I’ve prepared to give to my Toastmasters group.

Imagining weapons of mass destruction is something of an American pastime. The dangers are obvious – millions dead and large-scale infrastructure overwhelmed or simply annihilated. These effects grip modern culture, seeping into irresponsibly speculative news and defining Michael Bay and Tom Cruise movies. Fun, but not a useful discussion to have in your daily life. Instead, I want to cover the automated systems already harming us – what Dr. Cathy O’Neil terms “Weapons of Math Destruction.” A Weapon of Math Destruction is any scalable system that uses a model with the potential to harm society. In her book, Weapons of Math Destruction, Dr. O’Neil defines three types of WMDs: invisible systems, opaque systems, and obviously harmful systems. I’m going to define those three types of WMDs, giving illustrative examples from her book and from my experience in the hiring automation field.

Invisible Systems

First, invisible systems. Who here has been given a personality test while being considered for a job? How many of those employers told you that your responses could automatically disqualify you? That’s an example of an invisible system – you don’t even know it’s there. On the surface, sure, no one wants to work with jerks, so just screen them out. The problem is two-fold: first, since you’re not aware of the test’s role, you can’t appeal the decision or find out what went wrong; second, “jerk” is not a well-defined term, and there is no test for it. In this case, employers misuse common personality tests (OCEAN, MBTI, etc.) for something they weren’t designed for: candidate filtering. These tests were actually designed to help teams work together by helping members understand what makes each individual tick.

This specific issue has a history rooted in discrimination against people with mental illnesses such as depression, PTSD, and anxiety. Since they cannot legally filter out people with mental illnesses directly, employers fall back on the poorly correlated results of personality tests. Systems like these have obvious potential for abuse. Because they’re invisible, there is no public accountability and no way to correct the harm these practices cause.
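To make the mechanics concrete, here is a minimal sketch of what such an invisible filter tends to look like in practice. The names, the score, and the cutoff are all hypothetical – no real vendor’s code – but the shape is the same: a hard threshold applied before a human ever reads the application, with no notification and no appeal.

```python
# Hypothetical sketch of an "invisible" screening step. Candidate,
# AGREEABLENESS_CUTOFF, and screen() are illustrative names, not any
# real assessment vendor's API.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    agreeableness: float  # 0.0 - 1.0, taken from a personality questionnaire

AGREEABLENESS_CUTOFF = 0.4  # arbitrary threshold chosen by the employer

def screen(candidates):
    """Silently drop anyone below the cutoff. The candidate never sees
    this step, gets no explanation, and has nothing to appeal."""
    return [c for c in candidates if c.agreeableness >= AGREEABLENESS_CUTOFF]

applicants = [Candidate("Ada", 0.35), Candidate("Grace", 0.72)]
print([c.name for c in screen(applicants)])  # ['Grace'] -- Ada is never told why
```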

Opaque Systems

But what if a system is visible, but its owner doesn’t want to reveal how it works? This is an opaque system. Opaque systems are very prevalent in software, especially in artificial intelligence development. There are plenty of startups promising automated systems that match candidates to job openings, aiding or even eliminating the role of recruiters. On the surface this seems like a great idea – by making it easier for companies to hire people, it should become easier for people to get hired. What you’ll notice is that none of these services reveal how they match candidates – it may be proprietary logic, or a machine learning system that obscures the logic even from the company using it. Candidates who sign up for these systems know that there is a matching algorithm, but they aren’t let in on how it reasons about them. Since candidates don’t know the strengths and limitations of these systems, they can’t tailor their resumes or profiles. This is made worse by the fact that recruiters usually have a limited understanding of the skills they’re hiring for, so they can reason about neither the system they’re using nor the skills they’re looking for. The system could be biased by race or gender, and the software’s developers may not even know.
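As a rough sketch of why that opacity matters, consider the toy scoring function below. The features and weights are entirely invented – a stand-in for whatever a vendor actually uses – but they show how a proxy variable can quietly encode bias inside a “fit” score that the candidate only ever sees as a single number.

```python
# Hypothetical sketch of an opaque matching score. Feature names and weights
# are invented for illustration; the candidate only ever sees the final
# number, never the features or weights behind it.

FEATURE_WEIGHTS = {
    "years_experience": 0.30,
    "keyword_overlap":  0.45,
    "zip_code_score":   0.25,  # a proxy feature like this can quietly encode
                               # race or class bias without anyone intending it
}

def match_score(profile: dict) -> float:
    """Return an opaque 0-1 'fit' score. Nothing tells the candidate which
    feature hurt them, so they can't meaningfully improve their profile."""
    return sum(weight * profile.get(feature, 0.0)
               for feature, weight in FEATURE_WEIGHTS.items())

score = match_score({"years_experience": 0.8,
                     "keyword_overlap": 0.5,
                     "zip_code_score": 0.2})
print(round(score, 2))  # one opaque number -- but why that number?
```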

By choosing to make their candidate-matching systems opaque, these services discriminate arbitrarily against candidates who haven’t optimized their profiles for them, and there isn’t a way for candidates to learn how to improve their odds. The system is unappealable, and outsiders can’t reason about how it behaves, so they are powerless.

Harmful Systems

But what if a system is visible and the implementers are transparent about how it works? You’re still not out of the woods. Many companies do a credit check before making a hiring decision. This system is visible and transparent – you know they are checking your credit history and that they have some minimum bar they’ll use to make a decision. Again, this initially seems like a good choice – if someone isn’t able to handle their finances responsibly, how can you expect them to be responsible with their job?

Financial irresponsibility isn’t the only way to end up with bad credit. Someone could steal your identity. A hospital might balance-bill you for tens of thousands of dollars beyond what your insurance covers. Maybe you’re still recovering from the financial crisis. But even if you are at fault for your financial history, systematically denying you a job will only make things worse for you and people like you. This is a simple feedback loop: people with bad credit get fewer and worse jobs, so their credit score gets worse. This is one of the many systems contributing to the cycle of poverty in the US.
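Here is a toy simulation of that loop, with completely made-up numbers; only the direction of the arrows matters. Once you fall under the threshold, the same rule that rejected you keeps pushing your score down.

```python
# Toy simulation of the feedback loop described above: employers reject
# applicants below a credit threshold, and being rejected pushes a person's
# credit further down. All numbers are invented for illustration.

CREDIT_FLOOR = 600

def one_hiring_cycle(credit: int) -> int:
    if credit >= CREDIT_FLOOR:
        return credit + 20   # steady income, bills paid on time, score rises
    return credit - 20       # no job, missed payments, score keeps sliding

score = 590                  # starts just under the bar
for year in range(5):
    score = one_hiring_cycle(score)
    print(year + 1, score)   # 570, 550, 530, ... the gap only widens
```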

Conclusion

The companies using and profiting from these systems – these WMDs – rarely examine their impact on the world, and are unlikely to share what they find if they do. As citizens, we have to be aware that these kinds of automated systems exist and influence much of our lives. Invisible and opaque systems are unaccountable, and we have to push back, because we usually only learn that they exist and hurt people once they’ve reached a monstrous size.

As the designers of systems, we have to make sure we aren’t falling into these pitfalls. If you’re making something with the potential to improve the lives of millions and have any sort of professional ethic, how can you live not knowing whether it is actually having that impact? Do you really take pride in your system if you’re not willing to let someone independently verify its effects?

These WMDs are already hurting you and the people around you. I’ve only given examples in hiring, but imagine the collective effect of thousands of these systems across every industry – real estate, medicine, finance – each impacting millions of lives. You probably participate in several, and may even be building one for work. Algorithms to automate systems will only become more prevalent with time. We have to be ready, and responsible.
