Accelerating AI in an Unequal World – What Should We Do?
SIOBHAN MACKENZIE HALL | OPINION PIECE
The Graduate Inequality Review, Volume II (July 2023)
The development and pervasiveness of artificial intelligence (AI) are accelerating. AI is improving our lives. Medical diagnoses are becoming faster (Marr, 2020) and reaching people in previously inaccessible areas, and self-driving cars promise a future without accidents and more independence for those living with disabilities (Mervis, 2017). Image and text generation can happen at the push of a button, facial recognition secures the secrets on our devices, and AI-driven productivity tools are constantly being marketed. ChatGPT, a chatbot released at the end of 2022, shattered the user-uptake records of Instagram and Netflix (Hu, 2023) and has set new expectations for the potential of AI. With all these improvements, should we be optimistic, or is caution warranted?
I will argue in favour of the latter. Despite overwhelmingly positive applications, even beyond those listed above, we need to stop and think: Who is reaping the benefits? In which contexts are these technologies being used, and why? For example, a standard COVID triaging protocol in the US was shown to disproportionately deny care to Black patients, despite their presenting clinical outcomes similar to those of white patients (Roy et al., 2021). Black people are more likely to be incorrectly flagged in automated crowd surveillance (Nadeem et al., 2022), and more likely to be predicted to re-offend than white offenders (Mulligan et al., 2019; Kantayya, 2020). Women are at a disadvantage in automated recruitment pipelines used in the hiring process (Dastin, 2018), and this disadvantage increases when we consider intersectionality (Buolamwini and Gebru, 2018). When we stop and consider that these technologies do not treat everyone in the same way (Berg et al., 2022; Weidinger et al., 2021; Manzini et al., 2019; Burns et al., 2018), and that this has tangible impacts on people’s lives, we see that there is reason for caution.
WHAT IS AI AND WHERE MIGHT UNFAIRNESS BE INTRODUCED?
Artificial intelligence (a term used somewhat interchangeably with machine learning and learning algorithms) is neither an all-knowing, one-size-fits-all entity nor a single solution. It is a collection of mathematical models, systems and algorithms, each deployed differently to address a wide array of tasks in different environments. These systems learn from massive, static datasets that are essentially a moment frozen in time – capturing one moment in history, with all its ingrained injustices and proxy variables for diverse forms of discrimination (Birhane et al., 2021; Jia et al., 2021; Weidinger et al., 2021). AI largely operates as a black box, making interpretability an open challenge. Static datasets and the black-box nature of AI are likely responsible for unfair outcomes for marginalised groups.
WHAT FAIRNESS FRAMEWORKS EXIST?
Fairness is not a straightforward term; it is context-dependent. What might be considered fair in one context (e.g., 50% representation of one gender) might not apply to another protected attribute or another setting (Mehrabi et al., 2019). This issue is compounded further by the static, hardcoded nature of AI. Common ethical frameworks, such as virtue ethics, deontology and consequentialism, provide baselines against which we can compare our behaviour and that of others. For example, consequentialism tells us that the net good of one action must be greater than the net good of another, and may provide a baseline for technological fairness (Mulligan et al., 2019). However, these frameworks alone are insufficient when it comes to the acceleration of AI in our world. They describe a human-based world; they were not theorised to account for the introduction of AI. This qualification matters because the interactions between AI and humans, and the impacts on their lives, are contingent upon the model’s training material, which is a frozen reflection of society with all attendant historical and contemporary injustices baked into the data. AI does not demonstrate the same flexibility as human judgement, which means that the same groups are discriminated against repeatedly. Current AI systems can neither reflect on nor explain their actions, which makes accountability difficult to enforce. Further, comparing AI systems to each other and determining how “fair” they are is non-trivial (Berg et al., 2022). Creating a unified testing system or standard that all models must pass is challenging: it requires inventorying protected attributes and designing experiments to test manifestations of bias in AI model outputs, examining one protected attribute in isolation to avoid confounding factors.
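To make this concrete, consider one of the simplest fairness tests in common use, demographic parity, which checks whether a system's positive-decision rate is equal across groups. The following sketch (all group names and decision data are invented for illustration) shows how such a test reduces "fairness" to one rigid number for one attribute at a time, which is precisely its limitation:

```python
# Hypothetical illustration of demographic parity, one common fairness metric.
# All group names and decision data below are invented for this sketch.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g., loans approved, CVs shortlisted)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# 1 = positive outcome, 0 = negative outcome, one entry per applicant
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # selection rate 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # selection rate 0.375
}

gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
```

A gap near zero would pass this one test for this one attribute, yet say nothing about intersectional effects, other protected attributes, or whether the metric survives contact with real-world deployment.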
These experiments are set up to satisfy a pre-defined, rigid measurement of bias that may not reflect the influence of other attributes or real-world context, and the measure is often not tested in real-world scenarios (Weidinger et al., 2021). In a paper aptly named “Lipstick on a Pig”, Gonen and Goldberg (2019) show that a debiasing method can be deemed successful simply by satisfying one metric: for example, blinding the model to the embedding components of protected attributes, so that it no longer associates a protected class with a context (such as ‘he’ and ‘she’ with particular occupations). However, by slightly adjusting the method of analysis and probing for the protected attribute in a different way, the authors uncovered proxy variables that still contained the sensitive information, even after the model was made blind to the explicit embeddings. This supports the notion that our current methods can only provide patchwork fixes for societal unfairness, which is deeply embedded in the data.
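A toy sketch can illustrate the flavour of this finding. Below, an explicit "gender direction" is projected out of some invented word vectors (a simplified stand-in for the kind of debiasing Gonen and Goldberg examined; the words, vectors and axes are all fabricated for illustration). The explicit component vanishes, but the formerly biased words remain close to one another, so the attribute can still be recovered from the geometry:

```python
# Toy illustration of the "Lipstick on a Pig" finding: removing an explicit
# bias direction from word vectors need not remove the bias, because
# formerly biased words remain nearest neighbours of each other.
# All words, vectors and axes here are invented for this sketch.
import numpy as np

def project_out(v, direction):
    """Remove the component of v along a direction (the 'debiasing' step)."""
    d = direction / np.linalg.norm(direction)
    return v - np.dot(v, d) * d

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend the first axis is the explicit gender component; the remaining
# axes are other usage statistics that act as proxy variables.
gender_direction = np.array([1.0, 0.0, 0.0])
vectors = {
    "nurse":      np.array([-0.8,  0.9, 0.1]),
    "homemaker":  np.array([-0.7,  0.8, 0.2]),
    "engineer":   np.array([ 0.8, -0.1, 0.9]),
    "programmer": np.array([ 0.7, -0.2, 0.8]),
}

debiased = {w: project_out(v, gender_direction) for w, v in vectors.items()}

# The explicit gender component is now zero for every word...
assert all(abs(v[0]) < 1e-9 for v in debiased.values())

# ...yet stereotypically grouped words are still each other's neighbours,
# so the protected attribute remains recoverable from the geometry.
print(cosine(debiased["nurse"], debiased["homemaker"]))  # high (near 1)
print(cosine(debiased["nurse"], debiased["engineer"]))   # low (near 0)
```

The proxy information survives in the coordinates that were never touched, which is the essence of why single-metric debiasing can amount to a patchwork fix.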
When we consider AI recommender systems favouring certain races and genders, and how this might reinforce stereotypical thinking as well as shape downstream tasks such as resource allocation and job opportunities, it is clear that AI systems are widening gaps of inclusion and access for already disadvantaged groups. Under a consequentialist approach, there may arguably be a “net good”, a balance of more “good” outcomes than “bad”, but we need to ask ourselves: who is reaping the benefit, and why? Are these simply the people who happen to match the default parameters and settings, such that AI is technically right much of the time? We must acknowledge that AI systems do not sufficiently account for the reality that most people do not match the encoded default, a default shaped by the communities that are most represented by virtue of their privilege. This means that already-marginalised groups continue to be marginalised and, by virtue of being under-represented, cannot counterbalance the number of times they are discriminated against.
Going forward, in working towards a more equal future where AI benefits everyone, there needs to be a greater push for accountability, or at least a better understanding of who should be liable when systems consistently and disproportionately impose negative impacts on certain groups. Marginalised voices need a say, and many of us need to listen, and then unflinchingly grapple with complexity in building frameworks of fairness and, ultimately, fairer models. There may not always be a neat, publishable solution. We need to lean into the messiness of science and resist over-reliance on standardised measurements, while working together towards fixing underlying deficiencies in society – deficiencies which are reflected in AI systems. In this way, perhaps, we can achieve the utopia where AI benefits all of us, not just the encoded default.