It’s an unfortunate reality, but research provides evidence that employers (and many others) give preferential treatment to people they deem more physically attractive. Could the comparative objectivity of AI actually be a solution to Beauty Bias?
Lookism, the Beauty Bias, and the Halo Effect
We’ve found time and time again that a candidate’s “qualifications” for a job don’t account for all of the factors in whether they’re chosen. Employment discrimination remains a problem: perfectly qualified candidates can be passed over because of gender, race, age, or class (or simply because another candidate shares a background with the person hiring them).
But in an article for the Harvard Business Review, Tomas Chamorro-Premuzic notes another important bias in candidate selection: lookism. Lookism means that candidates are favored or rejected based on physical attractiveness. This doesn’t just mean someone’s face and hair; it extends to other criteria including tattoos, obesity, and attire. And it doesn’t stop there. Research supports the idea of a “Halo Effect,” wherein people with one positive quality, such as attractiveness, are assumed to have others: they’re presumed smarter, more confident, more trustworthy, and more likeable. Here’s how that’s reflected in the studies cited by Chamorro-Premuzic:
- Higher grades: thanks to the Halo Effect, attractive students are deemed more conscientious and intelligent, and graded accordingly.
- More call-backs: attractive candidates received more call-backs than unattractive candidates or those who submitted no photograph.
- Higher salaries: salaries run 10–15% higher for people of above-average attractiveness in the United States.
- Hiring and firing: less attractive employees are less likely to be hired and more likely to be fired.
Can AI offer a solution?
With all these human biases, which are both troubling for us as a society and potentially costly for industries (good-looking people aren’t necessarily productive or effective people), could a non-human help us?
We’re actually optimistic that AI could play a role, but with some very important caveats, especially with regard to how the system is trained. Otherwise, we risk making the problem even worse. As Chamorro-Premuzic warns: “If we teach AI to imitate human preferences, it will not just replicate, but also augment and exacerbate human biases,” as we’ve seen in past cases where discriminatory training data led to discriminatory outcomes for real people.
While some of the more infamous examples of AI used for hiring leave a lot to be desired, Chamorro-Premuzic, together with Frida Polli and Ben Dattner, points out in another article that AI is in many ways more accountable than we are, because it’s easier to monitor an AI’s decision-making and track its biases: “That’s why it’s easier to ensure that our data and training sets are unbiased than it is to change the behaviors of Sam or Sally, from whom we can neither remove bias nor extract a printout of the variables that influence their decisions.”
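That “printout of the variables” isn’t just a metaphor; for many models it’s literally available. Here’s a minimal sketch, in Python with scikit-learn, of the kind of audit the authors describe. The data, feature names, and the attractiveness rating itself are hypothetical, invented purely for illustration:

```python
# A minimal sketch of auditing a screening model: unlike a human screener,
# a model's decision weights can simply be printed and inspected.
# All feature names and data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["years_experience", "skills_test_score", "attractiveness_rating"]

# Toy historical screening data: rows are candidates, columns match FEATURES.
X = np.array([
    [5, 82, 7], [2, 91, 4], [8, 60, 9], [1, 75, 3],
    [6, 88, 8], [3, 70, 2], [7, 95, 6], [4, 55, 5],
])
y = np.array([1, 1, 1, 0, 1, 0, 1, 0])  # 1 = advanced to interview

model = LogisticRegression(max_iter=1000).fit(X, y)

# The "printout of the variables" that influence the model's decisions:
for name, coef in zip(FEATURES, model.coef_[0]):
    print(f"{name:25s} weight = {coef:+.3f}")
# A large weight on attractiveness_rating would flag that the historical
# data (and therefore the model trained on it) is leaning on appearance.
```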
In short, AI can help find the right people for the job more efficiently, assuming it’s properly trained and used ethically by the hiring organization. With those factors in mind, we wonder if it then becomes as simple as never teaching the AI to classify certain physical attributes as attractive or unattractive, or perhaps training it without any images at all (though, as the sketch below suggests, that might be too simplistic).
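In data terms, the simplest version of “not teaching the AI about looks” is just removing appearance-linked columns before training. A sketch of that idea follows; the column names are our own assumptions, not drawn from any real system:

```python
# A sketch of the "don't teach it about looks at all" idea: strip any
# appearance-linked columns before a model ever trains on the data.
# The column names below are hypothetical.
import pandas as pd

APPEARANCE_COLUMNS = ["photo_url", "attractiveness_rating", "has_visible_tattoos"]

def strip_appearance_features(candidates: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the candidate table with appearance-linked columns removed."""
    return candidates.drop(columns=APPEARANCE_COLUMNS, errors="ignore")

# Caveat (why this may be "too simplistic"): remaining columns, such as a
# salary history already inflated by the Halo Effect, can still act as
# proxies for appearance even after the obvious columns are gone.
```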
But perhaps the biggest impact AI could have isn’t just countering the beauty bias itself, but the entire Halo Effect that bias generates. Chamorro-Premuzic suggests AI could function as “a diagnostic tool to predict someone’s likelihood of being deemed more effective in the business based on their perceived attractiveness.” Given that the Halo Effect unduly inflates metrics an AI might rely on (such as performance reviews and salary history), explicitly accounting for someone’s perceived attractiveness could make the hiring process meaningfully fairer.
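One way to read that diagnostic idea: treat perceived attractiveness as a known confounder and regress it out of halo-inflated metrics before they feed into any hiring or promotion model. The linear adjustment and variable names below are our assumptions, not a method from the article:

```python
# A sketch of correcting a halo-inflated metric: if attractiveness inflates
# performance reviews, fit the attractiveness -> review trend and keep only
# what the trend doesn't explain as the adjusted score. Hypothetical data.
import numpy as np
from sklearn.linear_model import LinearRegression

attractiveness = np.array([[7], [4], [9], [3], [8], [2]])  # rater-assigned, 1-10
review_score   = np.array([4.5, 3.8, 4.9, 3.5, 4.7, 3.2])  # annual review, 1-5

halo = LinearRegression().fit(attractiveness, review_score)

# Subtract the attractiveness-predicted component, then re-center the scale.
adjusted = review_score - halo.predict(attractiveness) + review_score.mean()
print(np.round(adjusted, 2))  # review scores with the attractiveness trend removed
```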
The Takeaway
We’re only just scratching the surface of how the comparative teachability and objectivity of AI offers another tool we can reach for where we still fail to get things right. A start-up might not have a mountain of applications that demands a hiring AI to sift through them (and it might never), but that doesn’t mean these systems, and the problems they’re meant to solve, won’t involve us later.
Lookism is easy to dismiss as benign compared to discrimination based on federally protected categories like race, sex, national origin, or religion. After all, “you got it, or you don’t,” right? But in giving this type of bias a free pass, we unwittingly permit the same prejudicial mechanics that draw false equivalencies between a person’s character or competence and a factor they have little to no control over, be it age, race, gender, or sexual orientation.
Even in the creative industries, where we might assume a “natural” respect for diversity and tolerance, we can still see instances where we’d assume the best or worst of someone based on their looks — the only difference is we might be cool with their tattoos (or lack thereof) but not the brands in their outfit.
In the past, many of our analyses covering AI have been of the cautionary type, where we warn about the dangers of AI threatening our way of life from its impact on the veracity of our media to its ability to displace us and other creative workers.
But we’d posit a more optimistic way for AI to be involved in shaping the lives of people applying to massive companies, where applications have to be processed at scale. In this scenario, people with nearly identical qualifications and experience get a fair shot at a job without being rejected over something as arbitrary as a name, much less what they look like.