A practical approach for addressing bias in artificial intelligence

November 14, 2022

University of Pennsylvania Professor and U-M alum Desmond Patton says building more ethical technological systems starts with bringing more voices to the table.

A portrait of Desmond Patton with a city skyline as a backdrop
Credit: Columbia University/SAFE Lab

Over the past decade, there's been no shortage of examples of human biases creeping into artificial intelligence processes. Back in 2020, Robert Williams, a Black Farmington Hills resident, was wrongfully arrested after facial recognition software misidentified him as a man shoplifting on security footage, an error that reflects a known weakness such systems have in accurately identifying people with darker skin. In 2019, researchers demonstrated that a software system widely used by hospitals to identify patient risks was less likely to refer Black patients for many types of care. A few years ago, Amazon abandoned an experimental AI recruiting tool when it discovered it was consistently favoring men over women.

How human biases get baked into AI algorithms is a complicated phenomenon, one which we covered with UM-Dearborn Computer Science Assistant Professor Birhanu Eshete and then-UM-Dearborn Associate Professor Marouane Kessentini in a story last year. As noted in that piece, bias doesn't just have one source, but bias problems are often rooted in the ways AI systems classify and interpret data. The power of most artificial intelligence systems rests in their ability to recognize patterns and put things into categories, but it's important to note that process typically starts with a training period when they're learning from us. For example, consider the image recognition algorithm that lets you find all the photos of cats on your phone. That system's intelligence began with a training period in which the algorithm analyzed known photos of cats that were selected by a human. Once the system had seen enough correct examples of cats, it acquired a new intelligence: an ability to generalize features essential to cat-ness, which allowed it to determine if a photo it had never seen before was a photo of a cat.
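As a purely illustrative sketch (not taken from the story above or from any real photo app), the toy example below shows that supervised-learning loop: a person supplies labeled examples, the model generalizes from them, and whatever judgment calls went into those labels travel with the model. The feature values and the choice of a simple scikit-learn classifier are assumptions made for brevity.

```python
# A minimal, hypothetical sketch of the training loop described above.
# The "features" here are made-up stand-ins for real image features.
from sklearn.linear_model import LogisticRegression

# Human judgment enters here: a person decided which examples count as a cat (1)
# and which do not (0). Any blind spots in that selection become the model's blind spots.
training_features = [
    [0.9, 0.8, 0.1],  # illustrative feature vector for a labeled "cat" photo
    [0.8, 0.9, 0.2],
    [0.1, 0.2, 0.9],  # illustrative feature vector for a labeled "not cat" photo
    [0.2, 0.1, 0.8],
]
training_labels = [1, 1, 0, 0]  # 1 = cat, 0 = not cat

model = LogisticRegression()
model.fit(training_features, training_labels)

# The "new intelligence": a guess about a photo the model has never seen before.
unseen_photo = [[0.85, 0.75, 0.15]]
print("cat" if model.predict(unseen_photo)[0] == 1 else "not cat")
```

The point of the sketch is simply that the model never sees "cats," only the examples a human chose to call cats; everything downstream inherits that choice.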

The important thing to note about the above example is that the algorithm's intelligence is fundamentally built on a foundation of human judgment calls. In this case, the key human judgment is an initial selection of photos that a person determined to be of cats, and in this way, the machine intelligence is embedded with our "bias" for what a cat looks like. Sorting cat photos is innocuous enough, and if the algorithm makes a mistake and thinks your dog looks more like a cat, it's no big deal. But when you start asking AI to do more complex tasks, especially ones that are embedded with very consequential human concepts like race, sex and gender, the mistakes algorithms make are no longer harmless. If a facial recognition system has questionable accuracy identifying darker-skinned people because it's been trained mostly on white faces, and somebody ends up getting wrongfully arrested because of that, it's obviously a huge problem. Because of this, figuring out how to limit bias in our artificial intelligence tools, which are now used widely in banking, insurance, healthcare, hiring and law enforcement, is seen as one of the most crucial challenges facing AI engineers today.

University of Pennsylvania Professor and U-M School of Social Work alum Desmond Patton has been helping pioneer an interesting approach to tackling AI bias. At his recent lecture in our Thought Leaders speaker series, Patton argued that one of the biggest problems, and one that's plenty addressable, is that we haven't had all the relevant voices at the table when these technologies are developed and the key human judgments that shape them are being made. Historically, AI systems have been the domain of tech companies, data scientists and software engineers. And while that community possesses the technical skills needed to create AI systems, it doesn't typically have the sociological expertise that can help protect systems against bias or call out uses that could harm people. Sociologists, social workers, psychologists, healthcare workers: they're the experts on people. And since AI's bias problem is both a technical and a human one, it only makes sense that the human experts and the technology experts should be working together.

Columbia University's SAFE Lab, which Patton directs, is a fascinating example of what this can look like in practice. Their team is trying to create algorithmic systems that can use social media data to identify indicators of psycho-social phenomena like aggression, substance abuse, loss and grief, with the ultimate goal of being able to positively intervene in people's lives. It's an extremely complex artificial intelligence problem, and so they're throwing a diverse team at it: social workers, computer scientists, computer vision experts, engineers, psychiatrists, nurses, young people and community members. One of the really interesting things they're doing is using social workers and local residents to qualitatively annotate social media data so that the programmers who are building the algorithms have appropriate interpretations. For example, Patton says, one day, he got a call from one of their programmers over a concern that the system was flagging the N-word as an "aggressive" term. That might be an appropriate classification if they were studying white supremacist groups. But given that their communities of focus are Black and brown neighborhoods in big cities, the word was being used in a different way. Having that kind of knowledge of the context gave them a means to tweak the algorithm and make it better.
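To make the idea concrete, here is a purely hypothetical sketch, not SAFE Lab's actual pipeline, of how community-informed annotations could override a naive keyword signal. The keyword list, context labels and function below are all invented for illustration.

```python
# Hypothetical keyword list a first-pass classifier might rely on.
NAIVE_AGGRESSION_TERMS = {"threat", "fight"}

# Hypothetical context labels supplied by social workers and community annotators:
# the same words can be conversational in one context and hostile in another.
COMMUNITY_CONTEXT = {
    "in-group banter": "not aggressive",
    "taunt directed at rival": "aggressive",
}

def classify_post(text: str, annotator_context: str) -> str:
    """Combine a naive keyword signal with a human annotator's context label."""
    keyword_hit = any(term in text.lower() for term in NAIVE_AGGRESSION_TERMS)
    # When annotators have labeled the context, their judgment overrides the raw keyword hit.
    if annotator_context in COMMUNITY_CONTEXT:
        return COMMUNITY_CONTEXT[annotator_context]
    return "aggressive" if keyword_hit else "not aggressive"

print(classify_post("you trying to fight?", "in-group banter"))           # not aggressive
print(classify_post("you trying to fight?", "taunt directed at rival"))   # aggressive
```

The design point is the one Patton describes: the words alone don't carry the label; the people who understand the context do.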

Patton says SAFE Lab's work is also drawing on the hyper-local expertise of community members. "The difference in how we approach this work has been situated in who we name as domain experts," Patton said. "We [hire] young Black and brown youth from Chicago and New York City as research assistants in the lab, and we pay them like we pay graduate students. They spend time helping us translate and interpret context. For example, street names and institutions have different meanings depending on context. You can't just look at a street on the South Side of Chicago and be like, 'that's just a street.' That street can also be an invisible boundary between two rival gangs or cliques. We wouldn't know that unless we talked to folks."

Patton thinks approaches like this could fundamentally transform artificial intelligence for the better. He also sees today as a pivotal moment of opportunity in AI's history. If the internet as we know it does morph into something resembling the metaverse (an encompassing, virtual reality-based space for work and social life), then we have a chance to learn from past mistakes and create an environment that's more useful, equitable and joyful. But doing so will mean no longer seeing our technologies strictly as technical, but as human creations that require input from a fuller spectrum of humanity. It'll mean universities training programmers to think like sociologists in addition to being great coders. It'll take police departments and social workers finding meaningful ways to collaborate. And we'll have to create more opportunities for community members to work alongside academic experts like Patton and his SAFE Lab team. "I think social work allows us to have a framework for how we can ask questions to begin processes for building ethical technical systems," Patton says. "We need hyper-inclusive involvement of all community members: disrupting who gets to be at the table, who's being educated, and how they're being educated, if we're actually going to fight bias."

###

Want to dig deeper into this topic? Check out another recent installment in our Thought Leaders speaker series, where U-M Professor Scott Page explains why diverse teams outperform teams of like-minded experts.