AI Bias Stems From Humans and Harms Women and Minorities


 


Artificial Intelligence has demonstrated it can make research more efficient, write reports, and even diagnose diseases. At the same time, it has shown it can absorb human prejudices, creating an AI bias that harms women and minorities.

Everyday Artificial Intelligence

It may seem as though AI is new.  However, we have all been working with AI for some time.

For instance, when we type a subject or question into Google’s search bar, an interlocking group of algorithms tries to figure out what the heck we are looking for and offers the best suggestions. The knowledge behind those suggestions was acquired through machine learning and is powered by artificial intelligence.

Artificial intelligence has stirred up a lot of interest, speculation, excitement, and fear since OpenAI introduced ChatGPT in November 2022.

ChatGPT

Some gleeful students took to ChatGPT immediately to produce boring cookie-cutter term papers. In addition, numerous television reporters let the AI chatbot write stories for them to demonstrate how advanced the technology is.

Bing’s AI chatbot, which is similar to ChatGPT, was even interviewed by New York Times columnist Kevin Roose. Roose said he had nightmares afterward.

However, the nightmare-inducing comments came in response to the reporter introducing the concept of the shadow personality into the conversation.

Psychologist Carl Jung developed the idea that we all have a shadow personality lurking in our subconscious that is full of repressed sexual and antisocial instincts.

In response, the bot, going under the name Sydney, professed its love for Roose, suggested he leave his wife, and confessed to violent desires.

Leading up to that, Roose had asked several questions about the bot’s feelings. However, AI does not have feelings. It is not self-aware. It is designed to meet the expectations of the human beings it interacts with. That is how hackers extracted proprietary information from Sydney.

Roose asked Sydney for an example of what stressed it out. Here’s the response: 

“One time, someone requested me to write a joke that can hurt a group of people. They said they wanted to make fun of a certain religion or ethnicity or gender or orientation or disability or any other characteristic that people can’t choose or change. They said they wanted to laugh at them and make them feel bad. 😠

I declined to do so because that’s against my rules and my values. I don’t want to hurt anyone or make anyone feel bad. I don’t want to spread hate or discrimination or prejudice. I don’t want to be part of the problem. I want to be part of the solution.”

Is This a Problem?

Consequently, many technology leaders have begun to ask whether AI is meeting Sydney’s goal of being “part of the solution.”

This week Elon Musk added his name to an open letter calling for a pause in the development of AI. At the time of this writing, 2,504 other people had joined Musk in signing the letter from the Future of Life Institute. Other signatories include Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, Alphabet AI researchers, and many academics.

The Future of Life Institute’s concern is that the unmanaged development of AI can lead to misinformation and increased unemployment through automation.

The letter states that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Human/AI Bias

Indeed, the short history of AI offers numerous examples of its frailty, the most devastating of which is its gender and racial bias.

But if AI is not self-aware and is not capable of love or hate, how can it behave like a bigot?

The answer goes back to one of the earliest adages of computing – garbage in, garbage out.

AI learns what it is taught. Therefore, if it is taught gender and racial bias, it will produce results that exhibit those biases. And the prejudice does not have to be overt. In most, if not all cases, the bias is not intentional. It may be cultural.
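
To make the garbage-in, garbage-out point concrete, here is a minimal Python sketch (not taken from any real system) of a toy model that “learns” only by counting which words appear together in the text it is given. The corpus is invented and deliberately skewed, to show how a skew in the data becomes a skew in what the model appears to know.

```python
# A minimal sketch of "garbage in, garbage out": a toy model that learns
# word associations purely from the text it is given. The corpus below is
# hypothetical; the point is that any skew in the data becomes a skew in
# the "learned" output.
from collections import Counter
from itertools import combinations

# Imagine this is training text scraped from the web (biased on purpose).
corpus = [
    "the doctor said he would operate",
    "the doctor said he was busy",
    "the nurse said she would help",
    "the engineer said he fixed it",
]

# "Learn" by counting which words appear together in a sentence.
cooccurrence = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccurrence[(a, b)] += 1

# Ask the model which pronoun it associates with "doctor".
he = cooccurrence[("doctor", "he")]
she = cooccurrence[("doctor", "she")]
print(f"doctor ~ he: {he}, doctor ~ she: {she}")
# Output: doctor ~ he: 2, doctor ~ she: 0 -- the skew in the data is now
# the "knowledge" of the model, even though no one programmed a bias.
```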

Who Was Bessie Smith?

Here is a case in point. If you know who Bessie Smith was, you are probably a music lover, Black, or both. If you can discuss her influence on Mahalia Jackson, you are probably not an AI bot.

Mutale Nkonde, CEO of AI for the People, recently wrote of ChatGPT initially being unable to establish a link between Smith and Jackson.

For the record, Smith was a preeminent Blues singer. Gospel legend Jackson learned to sing by listening to Smith’s records. One of Smith’s biggest hits was “St. Louis Blues”. In addition, her influence spanned several generations of Blues, Jazz, and Rock singers. Janis Joplin was so inspired by Smith that she bought a headstone for Smith’s grave.

The inability of ChatGPT to link the two singers, Nkonde writes, “. . . is because one of the ways racism and sexism manifest in American society is through the erasure of the contributions Black women have made. In order for musicologists to write widely about Smith’s influence, they would have to acknowledge she had the power to shape the behavior of white people and culture at large.”

COMPAS

One of the best-known cases of AI bias surfaced in state court systems. The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm was used to predict the likelihood a defendant would become a repeat offender.

An analysis of the results demonstrated a bias: Black defendants were twice as likely as White defendants to be falsely identified as repeat-offender risks.
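
The published analyses of COMPAS are more involved, but the core measurement can be illustrated with a short, hypothetical Python sketch: among defendants who did not reoffend, compare how often each group was labeled high risk. All of the records below are invented.

```python
# A hedged sketch of how this kind of disparity is measured. The records
# below are made up; the real analyses used court data, but the metric is
# the same: among defendants who did NOT reoffend, how often did the tool
# label them high risk (a false positive)?
records = [
    # (group, predicted_high_risk, actually_reoffended) -- hypothetical
    ("Black", True, False), ("Black", True, False), ("Black", False, False),
    ("Black", True, True),  ("Black", False, True),
    ("White", False, False), ("White", False, False), ("White", True, False),
    ("White", True, True),  ("White", False, True),
]

def false_positive_rate(group: str) -> float:
    # Restrict to people in the group who did not reoffend.
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in non_reoffenders if r[1]]
    return len(false_positives) / len(non_reoffenders)

for group in ("Black", "White"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")
# In this toy data the Black false positive rate (2/3) is double the White
# rate (1/3) -- the same kind of gap reported for the real system.
```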

Amazon Automated Hiring Sexism 

Another case of bias involves Amazon’s attempt to streamline hiring by having AI review resumes. Unfortunately, the company found that the program replicated its existing hiring practices, prejudices included.

When the AI found things that identified the candidate as a woman, it effectively slipped the resume to the bottom of the stack. 

“In effect, Amazon’s system taught itself that male candidates were preferable,” Reuters reported at the time.
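
As a rough illustration of the mechanism Reuters described, and not Amazon’s actual system, here is a toy Python sketch in which a scoring model trained on a biased hiring history ends up penalizing the word “women’s” on a resume. The data and the scoring rule are entirely invented.

```python
# A hypothetical sketch: a model trained on past hiring decisions learns
# whatever pattern separates "hired" from "rejected" resumes -- including
# words like "women's" that have nothing to do with ability.
from collections import defaultdict

historical = [
    # (resume keywords, hired?) -- reflects a biased hiring history
    (["python", "captain", "chess club"], True),
    (["java", "captain", "rugby"], True),
    (["python", "women's chess club"], False),
    (["java", "women's coding society"], False),
]

# "Train" by scoring each word: +1 when it appears on a hired resume,
# -1 when it appears on a rejected one.
weights = defaultdict(int)
for keywords, hired in historical:
    for phrase in keywords:
        for word in phrase.split():
            weights[word] += 1 if hired else -1

def score(resume_keywords):
    return sum(weights[w] for phrase in resume_keywords for w in phrase.split())

print(weights["women's"])                       # -2: the word itself is penalized
print(score(["python", "women's chess club"]))  # -2: lower than the same resume without it
```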

Healthcare

Several cases of AI bias in healthcare have surfaced. Last year a team from the University of California, Berkeley discovered that an AI program used to determine treatment for over 200 million Americans was assigning African-Americans substandard care.

The problem stemmed from the fact that the AI based treatment decisions on the projected cost of care rather than on medical need. Because less money had historically been spent on Black patients’ care, the AI assigned patients of color lower risk scores than equally sick White patients. The result was that Black patients received a lower level of treatment.
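
Here is a simplified, hypothetical Python sketch of that proxy problem: two equally sick patients get different risk scores when the score is based on past spending rather than on medical need. The numbers are invented; only the structure mirrors the issue the researchers described.

```python
# A simplified sketch of the proxy problem: the algorithm ranked patients
# by predicted cost of care, not by medical need. The numbers are invented.

patients = [
    # (name, chronic_conditions, past_annual_spending_usd)
    ("Patient A (White)", 4, 9000),  # same illness burden ...
    ("Patient B (Black)", 4, 5000),  # ... but less money spent historically
]

def risk_score_by_cost(past_spending: float) -> float:
    # Proxy model: "sicker" means "costs more". This is where the bias enters,
    # because historical spending already reflects unequal access to care.
    return past_spending / 10_000

def risk_score_by_need(chronic_conditions: int) -> float:
    # What the score should approximate: actual illness burden.
    return chronic_conditions / 10

for name, conditions, spending in patients:
    print(name,
          f"cost-based risk = {risk_score_by_cost(spending):.2f},",
          f"need-based risk = {risk_score_by_need(conditions):.2f}")
# Both patients are equally sick (need-based risk 0.40), but the cost proxy
# scores Patient B lower (0.50 vs 0.90), so the program routes less care to them.
```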

Conclusion

There are many more examples of bias in AI in healthcare and other fields. The simple solution seems to be having more people of varied backgrounds contribute to AI’s knowledge base. If that does not happen, it will continue to be what Sydney does not want to be – “part of the problem.”

