Women in AI: Chinasa T. Okolo investigates the impact of AI in the Global South

To give female academics and others focused on AI their well-deserved (and long-awaited) time in the spotlight, TechCrunch has been publishing a series of interviews focused on notable women who have contributed to the AI revolution. We will publish these articles throughout the year as the rise of AI continues, highlighting key work that often goes unnoticed. Read more profiles here.

Chinasa T. Okolo is a fellow at the Center for Technology Innovation within the Brookings Institution’s Governance Studies program. Prior to that, she was part of the ethics and social impact committee that helped develop Nigeria’s National Artificial Intelligence Strategy and served as an AI ethics and policy advisor to several organizations, including the African Union Development Agency and the Quebec Artificial Intelligence Institute. She recently received a PhD in computer science from Cornell University, where she researched how AI impacts the Global South.

Briefly, how did you get started in AI? What attracted you to the field?

I initially transitioned into AI because I saw how computational techniques could advance biomedical research and democratize access to healthcare for marginalized communities. During my final year of undergrad [at Pomona College], I began researching with a human-computer interaction professor, which exposed me to the challenges of bias within AI. During my PhD, I became interested in understanding how these issues would affect people in the Global South, who represent the majority of the world’s population and are often excluded from or underrepresented in AI development.

What work are you most proud of (in the field of AI)?

I am incredibly proud of my work with the African Union (AU) on developing the AU-AI Continental Strategy for Africa, which aims to help AU member states prepare for the responsible adoption, development, and governance of AI. The strategy took more than a year and a half to draft and was published at the end of February 2024. It is now in an open feedback period, with the aim of being formally adopted by AU member states in early 2025.

As a first-generation Nigerian-American who grew up in Kansas City, MO, and did not leave the United States until studying abroad during undergrad, I always wanted to focus my career on Africa. Engaging in such impactful work so early in my career makes me excited to pursue similar opportunities to help shape global and inclusive governance of AI.

How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

Finding a community with those who share my values has been essential to navigating the male-dominated technology and AI industries.

I have been fortunate to see many advances in responsible AI and notable research exposing the harms of AI led by Black scholars such as Timnit Gebru, Safiya Noble, Abeba Birhane, Ruha Benjamin, Joy Buolamwini, and Deb Raji, many of whom I have been able to connect with in recent years.

Seeing their leadership has motivated me to continue my work in this field and has shown me the value of going “against the grain” to make a significant impact.

What advice would you give to women looking to enter the field of AI?

Don’t be intimidated by a lack of technical experience. The field of AI is multidisciplinary and requires expertise in a range of areas. My research has been strongly influenced by sociologists, anthropologists, cognitive scientists, philosophers, and others within the humanities and social sciences.

What are some of the most pressing issues facing AI as it evolves?

One of the most prominent issues will be improving the equitable representation of non-Western cultures in prominent language and multimodal models. The vast majority of AI models are trained on English-language data that primarily represents Western contexts, leaving out valuable insights from much of the world.

Furthermore, the race towards building larger models will lead to further depletion of natural resources and greater impacts of climate change, which already disproportionately affect countries in the Global South.

What are some of the issues that AI users should consider?

A significant number of AI tools and systems that have been publicly deployed overstate their capabilities and simply do not work. Many tasks that people intend to use AI for could probably be solved by simpler algorithms or basic automation.

Additionally, generative AI has the ability to exacerbate harms seen in previous AI tools. For years, we have seen these tools exhibit bias and lead to harmful decision-making against vulnerable communities, which will likely increase as generative AI grows in scale and scope.

However, helping people understand the limitations of AI can improve the adoption and responsible use of these tools. Improving AI and data literacy among the general public will be critical as AI tools are rapidly integrated into society.

What’s the best way to build AI responsibly?

The best way to develop AI responsibly is to be critical of the intended and unintended use cases of these tools. People building AI systems have a responsibility to oppose the use of AI in harmful scenarios such as warfare and policing, and should seek external guidance on whether AI is appropriate for other use cases they may be targeting. Given that AI often amplifies existing social inequalities, it is also imperative that developers and researchers be cautious when creating and selecting the datasets used to train AI models.

How can investors better drive responsible AI?

Many argue that venture capitalists’ growing interest in “cashing in” on the current wave of AI has accelerated the rise of “AI snake oil,” a term coined by Arvind Narayanan and Sayash Kapoor. I agree with this sentiment and believe that investors must take leadership positions, alongside academics, civil society stakeholders, and industry members, to advocate for the responsible development of AI. As an angel investor, I have seen many dubious AI tools on the market. Investors should also invest in AI expertise to vet companies and request external audits of the tools demonstrated in presentations.

Anything else you would like to add?

This ongoing “AI summer” has led to a proliferation of “AI experts” who often downplay important conversations about the current risks and harms of AI and present misleading information about the capabilities of AI-enabled tools. I encourage those interested in educating themselves about AI to be critical of these voices and seek out reliable sources from which to learn.
