
The Perils of AI with Programmed Bias


Google found itself in a bit of a mess recently as a result of its generative AI tool, Gemini. Users reported on Twitter/X that images created with Gemini showed serious bias. But the direction of that bias wasn't the one most would expect. Rather than depicting race and gender in a way that reflects the biases of human history, the opposite occurred: figures who were historically white and male were portrayed as racial minorities and women. This might sound like a good thing, but it instead fueled a Google Gemini controversy that the company had to address. To a larger extent, it raised an AI image controversy that all generative AI companies must consider.

A programmed bias is behind the Google Gemini controversy–how do you fix that?

(Generative AI has been fed a steady diet of Internet nonsense–read more in this Bold story.)

Given that generative AI is trained on real-world data, perhaps this AI image controversy shouldn't be a surprise. But for many who foresee AI as the future, there is a concern that such biases need to be eliminated. To achieve this, however, AI companies like Google must add guardrails and protections. This is easier said than done, as the current Google Gemini controversy shows. The bigger question is what kind of AI we want as we move into this unknown future. Do we want one that is idealistic, or one that depicts the world as it actually is? This is a key question, and as with most things, context certainly matters. It is what Google, like every other AI developer, is trying to determine.

The Google Gemini Controversy

While debates over AI image controversies are not new, a recent Google misstep has brought them back to the forefront. Using the Gemini image creation tool, users on X generated images from text prompts. In this instance, the prompts were historical in nature, requesting depictions of the pope, America's founding fathers, and so on. The results Gemini provided were surprising. The pope was portrayed as a woman in some cases and as an ethnic minority in others. Likewise, the founding fathers were rendered as racially diverse, with most depicted as African American. Gemini even equated some texts written by Elon Musk to comments made by Hitler. Given that these images were incongruent with historical reality, Gemini was naturally accused of inherent bias.

Unlike other claims of bias and discrimination, the Google Gemini controversy erred on the progressive side. People once again accused Google of liberal bias in its AI training methods, alleging that an anti-conservative slant had led Gemini to purposely portray traditionally white individuals as non-white. To some extent, this is true: Google's guardrails overseeing AI image creation missed the mark. But at the same time, the AI image controversy demonstrates the challenge all AI platforms face in negotiating various biases. It's not as easy as it may seem, and efforts to curb discrimination may actually amplify it. Thus, it's important to decide when and where de-biasing algorithms should be used. In any case, pleasing everyone isn't likely, as the Google Gemini controversy shows.

No one wants an AI that creates images or makes statements too far removed from reality.

AI Guiderails and Safeguards

All generative AI platforms inherently have biases. Why? Because the Internet data on which they are trained contains human biases, both current and historical. It's well recognized that AI tools routinely portray professionals and CEOs as white men, because white men disproportionately hold these positions in society, and that is reflected on the Internet. To address this, AI companies impose an oversight layer of guardrails and safeguards. Rather than always rendering CEOs as male, an overriding algorithm encourages images of female CEOs instead. It is this type of algorithm that created the Google Gemini controversy: for lack of fine-tuning, Gemini went too far, inciting the AI image controversy that followed.
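To make the idea of an oversight layer concrete, here is a minimal Python sketch of how a guardrail might rewrite a prompt before it ever reaches the image model. Everything in it is hypothetical, including the role list, the attribute choices, and the function name; the guardrails Google and others actually use are proprietary and far more sophisticated.

```python
import random

# Hypothetical "oversight layer" that rewrites image prompts before they
# reach the model. Role lists and attributes below are illustrative only.

# Roles that raw training data tends to render as white and male by default.
SKEWED_ROLES = {"ceo", "doctor", "engineer", "professional"}

GENDERS = ["male", "female"]
ETHNICITIES = ["white", "Black", "Asian", "Hispanic"]

def apply_guardrail(prompt: str) -> str:
    """Append randomly chosen diversity attributes to skewed-role prompts."""
    if any(role in prompt.lower() for role in SKEWED_ROLES):
        attrs = f"{random.choice(GENDERS)}, {random.choice(ETHNICITIES)}"
        return f"{prompt}, depicted as {attrs}"
    return prompt  # other prompts pass through unchanged

print(apply_guardrail("a CEO giving a keynote speech"))
# A rule this blunt is also the failure mode: applied to "the pope" or
# "America's founding fathers", it produces ahistorical images.
```

Note how the rewrite fires on any matching prompt with no sense of context, which is exactly the kind of under-tuned behavior the Gemini episode exposed.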

In most cases, AI companies employ a de-biasing algorithm to offer fairer, non-discriminatory results. For example, Adobe's Firefly encourages diverse skin tones based on geographic demographics: if 20% of people in a user's region are African American, then roughly that percentage of generated people will have that skin tone. The same holds for gender. Of course, these are not the only biases AI companies must consider in preventing an AI image controversy. Others include ageism, classism, conservatism, and even urbanism. Left to its own devices, AI tends toward urban environments, conservative appearances, and youthful individuals. Once again, these outcomes simply reflect what the AI was trained on. In other words, bias in, bias out. This is why guardrails and de-biasing algorithms are a must.
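Adobe has described Firefly's approach only in general terms, so the following Python sketch is a guess at the mechanism rather than its actual implementation: attributes are drawn at random in proportion to assumed regional demographics, so the long-run output matches the stated percentages.

```python
import random

# Hypothetical demographic-proportional sampler. The fractions below are
# illustrative placeholders; a real system would presumably derive them
# from the user's region.
REGION_DEMOGRAPHICS = {
    "white": 0.60,
    "Black": 0.20,
    "Asian": 0.12,
    "Hispanic": 0.08,
}

def sample_skin_tone() -> str:
    """Draw one skin tone in proportion to regional demographics."""
    tones = list(REGION_DEMOGRAPHICS)
    weights = list(REGION_DEMOGRAPHICS.values())
    return random.choices(tones, weights=weights, k=1)[0]

# Over many generations, about 20% of depicted people end up with the
# second tone, matching this hypothetical region's 20% share.
batch = [sample_skin_tone() for _ in range(1000)]
print({tone: round(batch.count(tone) / len(batch), 2)
       for tone in REGION_DEMOGRAPHICS})
```

The appeal of this design is that it anchors diversity to something measurable rather than to an arbitrary quota, though it still begs the contextual question the rest of this piece raises: proportional to which population, and for which prompts?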

A Philosophical Dilemma

An AI image controversy showed inherent biases in Google’s Gemini.

There is one additional thing the Google Gemini controversy highlights. It's evident that AI platforms have built-in biases based on the training data they receive. It's also clear that de-biasing algorithms are important and must be continually monitored and refined; the current AI image controversy supports this. What's less evident is the character the de-biasing approach should take. On the one hand, encouraging greater diversity to avoid discrimination is more equitable and ideal. On the other, applying this to historical events and figures can impose fallacies and inaccuracies. In this regard, context matters, and determining how to address bias across various contexts is difficult for AI companies. It might be fine to depict racial and gender diversity in some instances but not in others.

Basically, AI companies must walk a tightrope between the real and the ideal. In an idealistic world, the pope might be a woman or an ethnic minority. People of all classes, races, genders, and walks of life could be CEOs and professionals. But as statistics show, this is not the reality of the world we live in. This imbalance between the ideal and the real is what triggered the Google Gemini controversy in the first place. Ultimately, every generative AI company must make a decision about its de-biasing philosophy: to what extent should it portray reality as it is versus society's aspirations? Regardless of which way one leans, there's likely to be another AI image controversy looming.

 
