Artificial intelligence (AI) systems can have damaging effects, especially on marginalised communities. We must build a movement for algorithmic justice
Algorithmic Justice League (AJL) founder Dr Joy Buolamwini
Words by Sasha Costanza-Chock
Artificial intelligence (AI) is currently an umbrella term for a number of distinct algorithmic systems. There is no self-aware general artificial intelligence, like we see in Hollywood films. Instead, these are socio-technical systems, platforms and tools developed to perform tasks like automated scoring, recommendations, classifications, predictions, judgements and other evaluations of human attributes, characteristics or behaviour. They do this based on proxies derived from a variety of data sources and manipulated using computational approaches. So these are tools that can help us, but that can also do harm.
Unfortunately, AI systems are currently being developed and deployed in many ways that are unjust and cause harm, in particular to people from marginalised communities. For example, as a transgender person, I’m constantly misgendered by machines – I’ve written about how airport security systems constantly flag trans and other ‘non-normative’ bodies. More broadly, AI harms can take place when data is collected without consent; when datasets that reflect historical forms of discrimination are used to train automated decision systems; when AI tools are used to justify discriminatory treatment; and in many other ways.
The good news is that there is rapidly growing awareness of these problems. The bad news is that automated systems discriminate on a daily basis. For example, in the US, a widely used healthcare algorithm falsely concluded that Black patients were healthier than equally sick white patients. AI used in hiring decisions has been shown to amplify existing gender discrimination. Law enforcement agencies are rapidly adopting predictive policing and risk-assessment technologies that reinforce patterns of unjust racial discrimination in the criminal justice system. AI systems shape the information we see on social media feeds, and can perpetuate disinformation and hate when they are optimised to prioritise attention-grabbing content. The examples are endless.
In 2016, Dr Joy Buolamwini founded the Algorithmic Justice League (AJL), which aims to create a world with more equitable and accountable AI. AJL has been growing ever since, and I recently came on board as AJL’s director of research and design. Coded bias and AI harms present some of the greatest civil rights challenges of our time, and the whole AJL team is passionate both about diversifying the STEM field and about challenging the unjust and harmful use of AI systems. Our mission is to raise public awareness about the impacts of AI, equip advocates with empirical research to bolster campaigns, build the voice and choice of the most impacted communities, and galvanise researchers, policymakers and industry practitioners to mitigate AI bias and harms.
AJL combines art and research to illuminate the social implications and harms of AI. We are also designing tools, projects and systems, and growing a community to work on these challenges together. We work to raise public awareness about the impact of AI, increase accountability in its use and reduce AI harms in society that often arise from coded bias. Joy likes to say that ‘If you have a face, you have a place in the conversation’ about coded bias and the AI systems that increasingly shape our lives.
There are a host of challenges to our mission, though. First, we have to raise awareness about what is going on. Too often, AI systems are deployed without any notification to those they will impact. Second, we need to connect people to concrete ways to take action, so that they don’t just end up overwhelmed by the magnitude of the problem. We are currently designing a platform for AI Harm Reporting, where people can share their experience of being harmed by an AI system and get connected to the resources they need to take action.
We need to organise designers and technologists, make them aware of the unwitting harm they may be causing, and learn together how to build better systems. We also need to pass new laws to provide oversight of AI systems, set up independent third-party evaluation of AI and ensure that there is accountability when people are harmed. We need to build a movement! Fighting for algorithmic justice takes all of us.
In terms of our approach to research and design, we are inspired by the Design Justice Network principles. The quickest way to sum this up is: ‘Nothing About Us Without Us.’ This is a phrase popularised by the disability justice movement that we try to follow at AJL, and we believe that designers, especially those who are working on designing technological systems, need to really put it into practice.
A lot of harm is done through techno-solutionism: well-meaning designers and technologists create tools, applications and systems that too often unintentionally harm marginalised communities. Especially in AI systems, where we are training models to make decisions based on historical datasets, we need to pay attention to the ways that existing data is deeply biased, flawed or misleading because of legacies of race, gender, class, disability and other forms of inequality.
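To make that mechanism concrete, here is a minimal, hypothetical sketch in Python – not AJL’s code, and all data below is fabricated – showing how a model trained on historically biased hiring decisions simply learns to reproduce that bias.

```python
# Minimal, hypothetical sketch: a model trained on biased historical
# hiring data reproduces the bias. All data here is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented features: a qualification score and a group label (0 or 1).
qualification = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# The historical labels encode past discrimination: equally qualified
# members of group 1 were hired less often.
hired = qualification + rng.normal(0, 0.5, n) - 0.8 * group > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The trained model scores two identically qualified applicants
# differently, based only on group membership.
applicants = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(applicants)[:, 1])  # group 1 gets a lower score
```

Nothing in the training step is malicious; the discrimination arrives entirely through the historical labels, which is why scrutinising training data matters so much.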
But this idea doesn’t only apply to the design of AI systems – it applies to all kinds of design. This global movement towards egalitarian design is best encapsulated by the Index Award, the non-profit organisation which celebrates and scales designs that improve quality of life. We were honoured to be one of five winners this year, and will harness its platform and support to continue our work.
AI tools and systems can potentially be used for good, but we will always need to work hard to ensure that they are equitable and accountable. We need to establish clear guidelines and lines we will not cross – areas where it is unacceptable to use AI systems. We need to dramatically diversify the field. And we need much stronger oversight and accountability, including standards, laws, regulatory agencies and independent third-party auditors, to continuously monitor and evaluate AI systems and make sure they are not causing harm – and, when they do cause harm, to ensure that those responsible mitigate and redress it.
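As one small illustration of what such auditing can involve – the numbers below are hypothetical, and a real audit would go much further – here is the ‘four-fifths rule’, a rough screen for disparate impact used in US employment contexts: if one group’s selection rate falls below 80 per cent of another’s, the disparity is flagged for scrutiny.

```python
# Minimal sketch of one audit check (not a full audit): the
# "four-fifths rule" screen for disparate impact in selection rates.
def disparate_impact_ratio(selected_a: int, total_a: int,
                           selected_b: int, total_b: int) -> float:
    """Ratio of the lower group's selection rate to the higher one's."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical counts: 30 of 100 group-A applicants selected,
# versus 18 of 100 group-B applicants.
ratio = disparate_impact_ratio(30, 100, 18, 100)
print(f"{ratio:.2f}")  # 0.60 -- below 0.8, so flagged for review
```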
Sasha Costanza-Chock PhD (they/she) is director of research and design at the Algorithmic Justice League and author of Design Justice: Community-Led Practices to Build the Worlds We Need (2020)
This article originally appeared in the Winter 2021 issue.