Meta’s new AI council is made up entirely of white men

Meta on Wednesday announced the creation of an AI advisory board made up entirely of white men. What else would we expect? Women and people of color have been saying for decades that they are ignored and excluded from the world of artificial intelligence despite being qualified and playing a key role in the evolution of the field.

Meta did not immediately respond to our request for comment on the diversity of the advisory board.

This new advisory board differs from Meta’s current board of directors and its Oversight Board, both of which are more diverse in racial and gender representation. Shareholders did not elect this AI board, which also has no fiduciary duty. Meta told Bloomberg that the board would offer “ideas and recommendations on technological advancements, innovation and strategic growth opportunities” and would meet “periodically.”

Tellingly, the AI advisory board is made up entirely of businesspeople and entrepreneurs, not ethicists or anyone with a deep academic or research background. While one could argue that current and former executives from Stripe, Shopify, and Microsoft are well positioned to oversee Meta’s AI product roadmap given the immense number of products they have brought to market, it has been proven time and time again that AI isn’t like other products. It’s a risky business, and the consequences of getting it wrong can be far-reaching, especially for marginalized groups.

In a recent interview with TechCrunch, Sarah Myers West, CEO of the AI Now Institute, a nonprofit that studies the social implications of AI, said it is crucial to “critically examine” the institutions that produce AI to “make sure the public’s needs [are] served.”

“This is an error-prone technology, and we know from independent research that those errors are not distributed equitably, but rather disproportionately harm communities that have long borne the brunt of discrimination,” she said. “We should set the bar much, much higher.”

Women are far more likely than men to experience the dark side of AI. Sensity AI found in 2019 that 96% of AI deepfake videos online were non-consensual, sexually explicit videos. Generative AI has become far more prevalent since then, and women continue to be the targets of this violating behavior.

In a high-profile incident in January, non-consensual pornographic deepfakes of Taylor Swift went viral on X, with one of the most widely shared posts receiving hundreds of thousands of likes and 45 million views. Historically, social platforms like X have failed to protect women from these circumstances, but because Taylor Swift is one of the most powerful women in the world, the platform acted, temporarily blocking searches for her name.

But if this happens to you and you’re not a global pop sensation, you might be out of luck. There have been numerous reports of middle school and high school students making explicit deepfakes of their classmates. While this technology has been around for a while, it has never been easier to access: you don’t need to be tech-savvy to download apps that are specifically advertised to “undress” photos of women or swap their faces into pornography. In fact, according to reporting by NBC’s Kat Tenbarge, Facebook and Instagram hosted ads for an app called Perky AI, which described itself as a tool for creating explicit images.

Two of the ads, which reportedly escaped Meta’s detection until Tenbarge alerted the company to the problem, featured photos of the celebrities Sabrina Carpenter and Jenna Ortega with their bodies blurred, urging customers to prompt the app to remove their clothes. The ads used an image of Ortega taken when she was just 16 years old.

Allowing Perky AI to advertise was not an isolated mistake. Meta’s Oversight Board recently opened investigations into the company’s failure to handle reports of AI-generated sexually explicit content.

It is imperative that the voices of women and people of color are included in AI product innovation. These marginalized groups have long been excluded from the development of world-changing technologies and research, and the results have been disastrous.

A simple example is the fact that until the 1970s, women were excluded from clinical trials, which meant that entire fields of research were developed without understanding how they would affect women. Black people, in particular, see the impacts of technology created without them in mind; for example, self-driving cars are more likely to hit them because their sensors may have a harder time detecting Black skin, according to a 2019 study by the Georgia Institute of Technology.

Algorithms trained on already discriminatory data only regurgitate the same biases that humans trained them to adopt. Broadly speaking, we already see AI systems perpetuating and amplifying racial discrimination in employment, housing, and criminal justice. Voice assistants struggle to understand diverse accents, and AI detection tools often flag the work of non-native English speakers as AI-generated since, as Axios noted, English is AI’s native tongue. Facial recognition systems flag Black people as potential matches for criminal suspects more often than white people.

The current development of AI embodies the same existing power structures around class, race, gender, and Eurocentrism that we see elsewhere, and not enough leaders are addressing it. Rather, they are reinforcing it. Investors, founders, and tech leaders are so focused on moving fast and breaking things that they can’t seem to grasp that generative AI (the hottest AI technology of the moment) could make these problems worse, not better. According to a McKinsey report, AI could automate about half of all jobs that don’t require a four-year degree and pay more than $42,000 a year, jobs in which minority workers are overrepresented.

There is reason to worry about how an all-white-male team at one of the world’s most prominent tech companies, engaged in this race to save the world with AI, can advise on products for all people when only a narrow demographic segment is represented. It will take a massive effort to build technology that everyone, truly everyone, can use. In fact, the layers needed to build safe and inclusive AI, from research to an intersectional, societal-level understanding, are so intricate that it’s almost obvious this advisory board won’t help Meta get it right. At least where Meta falls short, another startup could emerge.
