A.I. (Artificial Intelligence) is not the Problem: We Need More Diverse and Inclusive Humans in the Tech Sector

Irma McClaurin
4 min read · Oct 20, 2023


Black and white toy robot on red wooden table. Photo by Andrea De Santis on Unsplash

In a recent #LinkedIn post about an #NPR story by Carmen Drahl, “AI was asked to create images of Black African docs treating white kids. How’s it go?”, my #blackanthropology colleague, Dr. David Simmons, reminded us of the real danger of #AI; he notes:

…AI still relies on humans — complete with their biases and assumptions, both implicit and explicit. Let’s work towards creating AI systems that are more inclusive.

Dr. Arsenii Alenichev, of Oxford University, is the researcher Drahl wrote about. His work demonstrates that what we should fear is not an #artificialintelligence that will take over or ruin the human world, but rather how the coding and input of data into AIs are done by human beings who arrive already socialized with cultural biases!

…[Alenichev’s] goal was to see if AI would come up with images that flip the stereotype of “white saviors or the suffering Black kids,” he says. “We wanted to invert your typical global health tropes.”

…They realized AI did fine at providing on-point images if asked to show either Black African doctors or white suffering children. It was the combination of those two requests that was problematic.

Racial and Gender Inequality in Silicon Valley

So, a computerized intelligence cannot imagine or configure anything beyond what its programmer can. If the white programmer cannot conceive of a Black doctor helping suffering white children, that bias is coded into the AI. It is only as smart as the person(s) who initially coded and input the data.
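This dynamic can be shown with a toy sketch. The snippet below is not how a real image generator works; it is a minimal, hypothetical model (the corpus and the `generate` function are invented for illustration) that simply samples from the role-to-race pairings present in its “training” data. Because the skewed pairings are all it has ever seen, they are all it can reproduce — the unseen combination never comes out.

```python
import random

# Toy "training corpus" of (role, race) caption pairs.
# Hypothetical, invented data -- meant only to mirror the skew
# Alenichev found, not any real model's dataset.
corpus = [
    ("doctor", "white"),
    ("doctor", "white"),
    ("doctor", "white"),
    ("suffering child", "Black"),
    ("suffering child", "Black"),
]

def generate(role):
    """Sample a race pairing for `role` from the training distribution.
    The "model" can only emit pairings it has actually seen."""
    seen = [race for r, race in corpus if r == role]
    if not seen:
        return None  # the requested combination was never in the data
    return random.choice(seen)

# Every "doctor" in the corpus is white, so that is all the model can output.
print(generate("doctor"))           # "white"
print(generate("suffering child"))  # "Black"
```

A real diffusion model is vastly more complex, but the underlying limitation is the same: its outputs are a recombination of the distribution humans fed it.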

According to the research on technology and inequality by UC Santa Barbara sociologist and ethnographer Dr. France Winddance Twine, we probably shouldn’t hold our breath for an inclusive AI, as Simmons requested — ain’t gonna happen.

Twine documents in her latest book, Geek Girls: Inequality and Opportunity in Silicon Valley, how implicit and explicit racial biases and gender inequality abound in Silicon Valley! She concludes the book with this prediction:

The technology sector is unjust and not yet a vehicle for economic justice and social mobility for everyone.

What’s an AI to do?

So, what’s an AI to do? We know that artificial intelligence is not autonomous. It cannot create anything — at least not at this moment in time — outside of its existing stored data.

It can reconfigure and make up “facts” — and also plagiarize and create false data by linking things together and stealing content from human researchers and writers, as Matt Novak points out in a May 2023 Forbes article about the new Google search engine: “Google’s New AI-Powered Search Is A Beautiful Plagiarism Machine.”

At this moment in time, AI does not have its own scarecrow brain; it simply mimics and expands upon its existing program.

Photo by Mateusz Raczynski on Unsplash

It is true, if we believe Isaac Asimov’s Robot science-fiction series and the Will Smith movie “I, Robot,” that A.I. could become a supercomputer, but it cannot, as Azmera Hammouri-Davis, MTS, puts it, #breaktheboxes of its human programmers.

Our greatest fear should not be of an autonomous A.I. — like HAL the computer in 2001: A Space Odyssey — though A.I.s are destined to create massive unemployment for laborers who are unskilled in the use of technology.

Our greatest fear MUST be of AIs that are supercoded with #whitesupremacy ideology and #genderinequality data.

This is not new stuff. Since 2018, groups like the Critical Code Studies Working Group at the University of Southern California, now “The Humanities and Critical Code Studies Lab” (HaCCS), led by Mark Marino, have been examining issues of inequality in coding.

Indeed, I discovered this fact some years ago — before Google began reading critiques of its coding practices — if you typed #blackbeauty into the Google search engine, all that appeared were images of horses, like in the movie Black Beauty.

Conversely, if you typed in #beauty, only images of #whitewomen appeared. Since then, Google has updated images in searches connected to these words, but whiteness still prevails.

These are just a few of the known biases coded into #AI historically — and this latest research experiment by Dr. Alenichev proves that racial stereotypes are still prevalent, such that in the coded minds of AI, all the suffering children are Black and Nonwhite and all the medical doctor saviors are white.

In the case of Black doctors treating white suffering children, such biases and assumptions against this possibility are rooted in #whitesupremacy ideology and beliefs; they are part of the tacit knowledge of most white people socialized in America, and of Europeans globally.

These human beliefs and biases will not and cannot change until medical schools are more diverse, and Silicon Valley becomes equal, ungendered, diverse, equitable, and inclusive!

It is not the AI that needs a #DEI reboot, but the human beings that code them!

Don’t hold your breath, however. The current climate of anti-CRT sentiment, with its attempts to whitewash American history and negate hundreds of years of human enslavement, suffering, and ongoing Black and Indigenous generational trauma, disparities, and inequality, suggests little hope for change toward a more socially and racially intelligent AI, given the current state of Silicon Valley and its human coding professionals.



Written by Irma McClaurin

Award-winning author/ anthropologist/consultant & past prez of Shaw U. Forthcoming: JUSTSPEAK: Race, Culture & Politics in America: https://linktr.ee/dr.irma
