In Time, Sara Wachter-Boettcher presents a provocative case against the notion of “unbiased” artificial intelligence.

The concept of algorithmic bias affecting employment isn’t new, either. Back in the summer of 2015, researchers from Carnegie Mellon and the International Computer Science Institute wanted to learn more about how Google’s ad-targeting algorithms worked. So they built a piece of software called AdFisher, which simulates web-browsing activities, and set it to work gathering data about the ads shown to fake users with a range of profiles and browsing behaviors. The results were startling: the profiles Google had pegged as male were much more likely to be shown ads for high-paying executive jobs than those Google had identified as female — even though the simulated users were otherwise equivalent.

So how can we ensure AI is a boon for marginalized groups, rather than just a shiny new way to reify the same old problems? It all depends on what, exactly, the AI does — and how it learned to do it.

For example, consider resume-screening tools. This type of software relies on natural-language processing — that is, a computer’s ability to understand human language as it’s actually spoken or written. To get language right, though, machines need a lot more than a dictionary. They need to understand all the nuance that goes into human communication.
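To see why a dictionary alone falls short, here is a minimal sketch (in Python, with entirely hypothetical resumes and keywords) of the kind of naive keyword matcher a simplistic screener might use:

```python
# A minimal sketch showing why dictionary-style keyword matching is not
# language understanding. All resumes and keywords here are hypothetical.

KEYWORDS = {"python", "managed", "leadership"}

def keyword_score(resume_text: str) -> int:
    """Count how many target keywords appear verbatim in the resume."""
    words = {w.strip(".,;:").lower() for w in resume_text.split()}
    return len(KEYWORDS & words)

resumes = {
    "A": "Managed a Python team; drove leadership initiatives.",
    "B": "Led a group of engineers building data pipelines in Python.",
    "C": "No leadership experience; never managed anyone. Python hobbyist.",
}

for name, text in resumes.items():
    print(name, keyword_score(text))
# A scores 3, B scores 1, C scores 3: B's equivalent experience is
# undervalued because "led" isn't in the keyword list, while C's
# negations ("no leadership", "never managed") count as positives.
```

The matcher rewards resume C's negations and penalizes resume B's paraphrasing, which is exactly the nuance gap that real natural-language processing has to close.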


Image: Pexels