Algorithmic Injustices: Towards a Relational Ethics with Abeba Birhane

EPISODE 348

About this Episode

Generally, when we think of AI ethics, we think of a handful of technical concepts (explainability, transparency, data bias, and so on), but Abeba Birhane, a Ph.D. student at University College Dublin, says we need to change the way we think about the topic.

Specifically, she wants to shift the conversation away from a fundamentally technology-first framing toward one that poses ethical questions from the perspective of the vulnerable communities our technologies put at risk.

Embodied Cognitive Science

Abeba comes from a background in cognitive science, more specifically embodied cognitive science, which focuses on the social, cultural, and historical context of cognition. This integrative understanding of cog-sci opposes the more traditional approach, which views cognition as "something located in the brain or something formalizable, something that can be computed." Embodied cog-sci accounts for ambiguities and contingencies instead of looking for clean boundaries.

AI Ethics and Disparities in Privilege and Control

The idea that technology shapes our cognition and our relationship to the world is nothing new. Abeba cites a good example of this concept from "The Extended Mind," the paper by Andy Clark and David Chalmers, which suggests that the smartphone has become an extension of the human mind. But what Abeba emphasizes as most important are the disparities in how different groups of people are affected by technological shifts, and the connection between privilege and control over that impact. AI is just the latest in a series of technological disruptions, and as Abeba notes, one with the potential to harm disadvantaged groups in significant ways.

Harm of Categorization from ML Predictions

Much of modern machine learning is, by its nature, in the business of making predictions. An ethical approach to AI demands that we ask hard questions about the people affected by those predictions and assess the "harm of categorization." When an algorithm predicts that someone is more likely to be a criminal, less likely to be successful, or less qualified to receive credit, the dangers of those predictions fall disproportionately on disadvantaged populations rather than on those in more privileged positions.

Abeba's paper, Algorithmic Injustices: Towards a Relational Ethics, which recently won the best paper award at the Black in AI Workshop at NeurIPS, posits relational ethics and a relational mindset as a way of rethinking those predictions. In other words, the questions we should be asking are: why are certain demographics more at risk, and how do we protect the welfare of the individuals most vulnerable to the social consequences of reductive labeling?

Her work also highlights that machine learning practice often rests on the assumption that the conditions being modeled are stable. This comes from the IID assumption, which treats data points as independent and identically distributed. For example, you might behave one way at work but speak and act differently at a party. This "code-switching" is natural to humans, but it violates the assumption baked into many ML systems that a person's actions arise from a single, fixed distribution. For the most part, this dynamism is not something that ML sufficiently accounts for. As Abeba points out, the "nature of reality is that it is never stable... it is constantly changing." So machine learning cannot be the final answer; it "cannot stabilize this continually moving nature of being." A relational ethics approach, by contrast, accounts for change and assumes that solutions must be revised over time.
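
As a rough illustration of what breaking the IID assumption looks like in practice, here is a minimal sketch (not from Abeba's paper; the "work" and "party" contexts, the label rules, and all numbers are invented for illustration). A classifier fit on data from one context keeps applying the pattern it learned even after the relationship between behavior and outcomes has shifted.

```python
# Minimal sketch of the IID assumption failing under a context shift.
# Everything here (contexts, label rules, numbers) is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_context(label_rule, n=1000):
    """Draw 2-D behavioral features and label them with a context-specific rule."""
    X = rng.normal(size=(n, 2))
    y = label_rule(X).astype(int)
    return X, y

work_rule = lambda X: X[:, 0] + X[:, 1] > 0    # how behavior maps to outcomes at "work"
party_rule = lambda X: X[:, 0] - X[:, 1] > 0   # the mapping shifts at the "party"

# Train on the "work" context only.
X_train, y_train = sample_context(work_rule)
model = LogisticRegression().fit(X_train, y_train)

# Test data drawn from the same distribution: the IID assumption holds.
X_same, y_same = sample_context(work_rule)
print("same-context accuracy:   ", model.score(X_same, y_same))    # ~0.99

# The person "code-switches": same features, different feature-outcome mapping.
# The frozen model's predictions are now close to chance.
X_shift, y_shift = sample_context(party_rule)
print("shifted-context accuracy:", model.score(X_shift, y_shift))  # ~0.50
```

The specific numbers don't matter; the point is that a model trained under the assumption of a single, stable distribution has no way to notice that the world it was fit to has changed, which is exactly the kind of ongoing revision a relational ethics approach calls for.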

Robot Rights vs. Human Welfare

Abeba recently published another paper with her colleague Jelle van Dijk of the University of Twente, called "Robot Rights? Let's Talk about Human Welfare Instead." Like all good things, the paper came to life after a series of debates circulating on Twitter, and it comes down to two major arguments:

  1. Robots < humans. That is to say, robots cannot be granted or denied rights because machines are not the same as humans or any other living being. The argument rests on a "philosophical post-Cartesian approach" (translation: being and knowing aren't abstract computations in a brain; they are grounded in a living body and enacted through a social environment. Robots arguably don't have conscious minds, nor do they have an embodied biological presence, so they don't qualify as "beings" in the world around them).
  2. AI is not truly autonomous and never will be, because there is always a human involved to some degree. Another layer to this is the overlooked labor of "micro-workers" who contribute to AI systems without acknowledgment (like when you have to pick out pictures of stop signs to prove you're not a bot).

Abeba and Jelle believe we've got way too many human-related issues with AI to worry about being nice to robots. This might be true, but as Sam jokingly points out, "People love their robots, I guess."

Reframing the AI Ethics Conversation

Throughout the interview and in her paper, Abeba raises a ton of compelling points in favor of reframing the AI Ethics conversation. How ML systems can account for these issues pragmatically is still a very tricky problem. For Abeba, "the best one can do [at the moment] is acknowledge this change in context and live this partial openness, and embrace reiteration and revision." Part of the process involves an active commitment to prioritizing understanding rather than prediction. But a shift in values might be a slow process.
