How to Make AI Equitable in the Global South

In 2021, Ishita Rustagi and I posed a question to machine learning experts and social change activists about how to advance “gender equitable AI.” By that, we mean AI that actively and deliberately works to advance gender equity and broader inclusion: it corrects historical injustices in how AI tools are created and administered, prevents discrimination, and puts inclusion first. Because most AI technologies are created in the Global North, without accounting for the inequalities between and within developing nations, the international development community must play a significant role in promoting and advancing strategies for more equitable AI.

The article reached the USAID Innovation and Technology divisions and helped make possible the Equitable AI Challenge, the first grant mechanism related to AI in international development, which invested in creative ways to identify and address gender biases within AI systems, particularly in a global development context.

Three years, five completed projects, and one Equitable AI Community of Practice later, what have we learned about gender-equitable AI in the Global South? And what role do international development practitioners play?

1. There are ongoing gaps in data from marginalized communities in the Global South. Getting more inclusive and equitable data takes deliberate intention. Digital gender data gaps stem from differences in Internet and smartphone availability and use, which are more pronounced in many Global South nations, and those gaps shape what, and from whom, machine learning algorithms learn. As part of their project to improve the gender awareness of a health chatbot deployed in Nigeria, the University of Lagos and Nivi collaborated to collect more health data in order to better understand gendered disparities in health queries and challenges. These efforts initially overrepresented urban dwellers, married women, and single men. The team had to carefully consider which dimensions of identity to focus on as they worked to deliberately and gradually increase representation across groups.
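
One practical habit that follows from this lesson is to routinely compare the composition of collected data against population benchmarks while collection is still underway. The sketch below is a minimal illustration of such a representation check, with invented column names and made-up benchmark shares; it is not the methodology the University of Lagos and Nivi team used.

    import pandas as pd

    # Hypothetical survey responses; "region" and "gender" are illustrative column names.
    responses = pd.DataFrame({
        "region": ["urban", "urban", "rural", "urban", "rural", "urban"],
        "gender": ["female", "male", "female", "female", "male", "female"],
    })

    # Assumed population benchmarks (e.g., from a census); these shares are made up.
    benchmarks = {
        ("urban", "female"): 0.25, ("urban", "male"): 0.25,
        ("rural", "female"): 0.25, ("rural", "male"): 0.25,
    }

    observed = responses.groupby(["region", "gender"]).size().div(len(responses)).to_dict()

    # Flag groups that are noticeably over- or underrepresented relative to the benchmark.
    for group, target in benchmarks.items():
        share = observed.get(group, 0.0)
        if abs(share - target) > 0.05:
            print(f"{group}: observed {share:.0%} vs. expected {target:.0%}")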

2. It can be difficult to grasp the full extent to which gender disparities appear in data. Working with the Ministry of Education of the Mexican state of Guanajuato to identify and reduce gender bias in the state’s new AI-based early alert system, Itad discovered several potentially gendered features that the algorithm might learn from. The team conducted this analysis after finding a four percent gender bias in the tool; they also wanted to find out how gender might be unintentionally reflected in other variables. For instance, if attendance is used as a factor in school dropout risk, girls might be unintentionally penalized for missing school because of their menstrual cycles.
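
One simple way to look for such proxy effects is to check, before modeling, whether each candidate risk feature differs systematically by gender. The sketch below uses invented column names and toy numbers purely to illustrate the idea; it is not Itad’s actual analysis.

    import pandas as pd

    # Hypothetical student records; the column names and values are illustrative only.
    students = pd.DataFrame({
        "gender": ["girl", "boy", "girl", "boy", "girl", "boy"],
        "attendance_rate": [0.82, 0.95, 0.78, 0.93, 0.85, 0.91],
        "prior_grade": [71, 68, 74, 70, 69, 72],
    })

    # A feature that differs sharply by gender can act as a proxy: a "gender-blind"
    # model may still penalize one group even if gender is never an explicit input.
    for feature in ["attendance_rate", "prior_grade"]:
        means = students.groupby("gender")[feature].mean()
        print(f"{feature}: gap between group means = {means.max() - means.min():.3f}")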

3. Gender “blindness” carries risks. In collaboration with RappiCard Mexico, researchers from the University of California, Berkeley, and the University of Houston created a gender-differentiated credit scoring model (i.e., a female-specific credit scoring model and a male-specific credit scoring model). Compared with a pooled model, the team found that the female-specific model increased loan approvals for women, removing some of the gender-blind model’s bias without sacrificing predictive power. This example shows how “gender-blind” algorithms can conceal or ignore preexisting disparities, unintentionally embedding them while also making it harder to determine whether discrimination is taking place. The same holds for other categories, such as race.
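
A minimal sketch of that comparison idea, pooled versus gender-specific models, is shown below using synthetic data and scikit-learn. Everything here (the features, the simulated relationship, the metric) is an assumption for illustration; the actual RappiCard analysis was considerably more involved.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 2000

    # Synthetic applicants: two features plus a gender flag (1 = female, 0 = male).
    gender = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2))
    # Simulate repayment with a relationship that differs slightly by gender.
    logits = 0.8 * X[:, 0] + (0.2 + 0.6 * gender) * X[:, 1]
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    # Pooled ("gender-blind") model trained on everyone.
    pooled = LogisticRegression().fit(X, y)

    # Gender-specific models trained separately, then compared on the same subgroup.
    for flag, label in [(1, "female"), (0, "male")]:
        mask = gender == flag
        specific = LogisticRegression().fit(X[mask], y[mask])
        auc_pooled = roc_auc_score(y[mask], pooled.predict_proba(X[mask])[:, 1])
        auc_specific = roc_auc_score(y[mask], specific.predict_proba(X[mask])[:, 1])
        print(f"{label}: pooled AUC {auc_pooled:.3f}, gender-specific AUC {auc_specific:.3f}")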

4. Efforts to create gender-equitable AI are complicated when data does not capture gender identity. AidData at the College of William & Mary, in collaboration with the Ghana Center for Democratic Development (CDD-Ghana), assessed gender bias in AI systems that estimate household wealth and guide welfare distribution. To evaluate potential gender bias in the household wealth estimates, the team had to determine the gender of each head of household. However, many of these AI systems are trained on and use data from existing Demographic and Health Surveys, which are anonymized, clustered, and stripped of gender information to safeguard the privacy of individual families. To perform the gender evaluation, the team had to assign a “household gender” using incomplete gender-based assumptions and additional data. In another example, Nivi both asked users directly about their gender and inferred it from their Facebook accounts. But efforts to tailor gendered health information were complicated when an individual’s self-identified gender on Facebook differed from the one reported in the tool (possibly because transgender or nonbinary people gave different answers in different places, or because family members shared accounts and devices).

5. A lack of transparency creates trust gaps. People choose whether and how to act on the recommendations and predictions that AI programs make. With little information available about how AI systems work internally, people may either distrust the technology and not use it, or accept its results without question. For instance, the Itad consortium observed implementation problems when teachers received AI-generated predictions of school dropout risk but were ill-equipped to challenge the results or apply them appropriately.

Where should we go from here?

First things first: Equity isn’t the status quo. Because of this, equitable tools must be deliberately planned for, both up front and throughout the process. Starting with this expectation can guide decisions about who is on the team, who is absent, and who is over- or underrepresented in the data, in addition to more technical issues like data inputs, data documentation, algorithm development, and ongoing management.

We advise international development professionals to follow these five essential steps:

Work responsibly to create representative datasets. While digital inclusion initiatives that expand smartphone and Internet access can lessen digital inequalities and generate vast amounts of data to close gender data gaps, it is crucial to take privacy and safety concerns into account. Consider collaborating with local community organizations that can help with data collection and stewardship as well as with identifying relevant safety concerns. International development practitioners should also look into data cooperatives and other mechanisms that account for ownership and power when building datasets. Make sure individuals are able to self-identify their gender and other demographic details.

Meaningfully engage the people who will use the technologies, from the start. Many AI tools and foundation models are created in the Global North, frequently without data that represents diverse cultures around the world or a deep understanding of the contexts in which the tools might be used. Hire team members from the regions where the tool will be used, and collaborate with the tool’s intended users as co-creators on design, validation, and testing. Social scientists or gender experts embedded in teams can also play a critical role in identifying and tracking gender-related gaps during AI system development.

Track and adjust algorithms for gender. Because it is not always evident how gender norms are reflected in data and algorithms, audits must examine performance across demographic groups.
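
In practice, a disaggregated audit can start with something as simple as computing the same error metrics separately for each gender rather than a single aggregate number. The function below is a generic illustration with invented inputs, not any particular project’s audit tooling.

    import numpy as np

    def audit_by_group(y_true, y_pred, groups):
        """Report positive-prediction rate and false-negative rate per group."""
        for g in np.unique(groups):
            mask = groups == g
            positive_rate = y_pred[mask].mean()
            actual_pos = y_true[mask] == 1
            fnr = (y_pred[mask][actual_pos] == 0).mean() if actual_pos.any() else float("nan")
            print(f"group={g}: positive rate {positive_rate:.2f}, false-negative rate {fnr:.2f}")

    # Toy labels and predictions for two groups, purely for illustration.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
    groups = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])
    audit_by_group(y_true, y_pred, groups)
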
Prioritize transparency and training to build trust. Make use of resources (e.g., model cards, dataset nutrition labels) that describe the contents of datasets or models, and take steps to make the technology transparent to non-technical stakeholders. This can include training stakeholders on responsible use and administration of AI tools, ethical considerations, implications for gender equity, and responsible deployment and management practices. Understanding what transparency means to diverse global groups with lower levels of digital literacy will require additional time and resources.
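
A model card does not need to be elaborate to be useful; even a small structured summary that travels with the model gives non-technical stakeholders something concrete to interrogate. The fields and values below are a hypothetical, partial example, not drawn from any of the projects above.

    # A minimal, illustrative model-card-style summary kept alongside a deployed model.
    model_card = {
        "model_name": "dropout-risk-classifier (hypothetical)",
        "intended_use": "Flag students who may need extra support; not for punitive action.",
        "training_data": "De-identified school records, 2019-2023 (illustrative description).",
        "known_limitations": [
            "Attendance may act as a proxy for gender-related absences.",
            "Rural schools are underrepresented in the training data.",
        ],
        "performance_by_gender": {"girls": {"recall": 0.81}, "boys": {"recall": 0.84}},
        "contact": "responsible-ai@example.org",
    }

    for field, value in model_card.items():
        print(f"{field}: {value}")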

Conduct risk assessments and monitor impacts over time. Risk assessments can be used to investigate potential harms, biases, privacy concerns, and transparency issues before or after a product is developed. Evaluations can also cover the tool’s impacts, who is using it and how, and any concerns about fair access for different stakeholders or groups. These assessments may include ongoing audits that evaluate performance across genders. Because the technical tools that support such audits were primarily designed in and for Western contexts, they are insufficient to capture the full range of potential harms, and they tend to focus on narrower notions of bias or fairness rather than the broader ways AI tools can perpetuate discrimination.

Funders should support more research and initiatives examining equitable AI models and datasets, as well as participatory processes that prioritize fairness, inclusion, and agency. Funders can prioritize community agency and control in data-gathering efforts, which can help close data gaps. (The Data Empowerment Fund, an intriguing new funding instrument, seeks to do precisely that.) When funding the development and implementation of AI models and applications, it is important to prioritize applicants who integrate risk assessments and promote equity, and to provide financial support for tracking impacts and lessons learned. Funders must also invest in fair processes that elevate the agency of marginalized populations worldwide and genuinely center their needs, beliefs, and views, rather than investing only in a specific tool or output. Finally, funders can help foster a sense of community among researchers, practitioners, and policymakers.

Looking Forward

As the use of AI grows, there is enormous potential to ensure that these technologies are developed and used to advance gender equity. We cannot simply assume that using AI “for good” will inevitably lead to the fair outcomes we want. To assume so would be to repeat the development sector’s past mistakes with higher stakes. In this new era of generative AI, given the opacity and scope of AI tools, we are likely to face consequences more dire than the abandoned, corroding technologies that dot the history of the development sector. Rather than merely seeking tools that are “less biased” than the status quo, we as international development practitioners and funders need to hold ourselves to a higher standard and prioritize emancipatory, equity-centered tools.

We must resist the allure of “techno-solutionism,” the belief that technological advances alone can fix societal problems. It can be tempting to invest in building or scaling a digital technology, such as a specific AI tool. In certain circumstances that is a crucial step, alongside careful analysis and the recommendations above. But we can more responsibly achieve the outcomes we seek by funding equitable co-creation processes that prioritize the agency of diverse people and communities, and by remaining open to non-technical as well as technological solutions.

 
