Opinion: Google’s AI blunder over images reveals a much bigger problem

The Unexpected Politeness of AI: A Modern-Day HAL Effect and the Implications of Google’s Gemini Scandal

In Stanley Kubrick’s groundbreaking 1968 film, “2001: A Space Odyssey,” viewers were introduced to one of the earliest depictions of an artificial intelligence system, HAL. When asked directly by the lone surviving astronaut to let him back into the spaceship, HAL politely refused with the famous line, “I’m sorry, Dave. I’m afraid I can’t do that.”

Recently, some users encountered a similar situation with Gemini, an AI assistant and chatbot developed by Google as a competitor to OpenAI’s ChatGPT. In certain instances, Gemini politely declined requests to generate images of historically White figures such as the Vikings.

Unlike HAL, Google’s Gemini offered an explanation for its refusal, stating that generating images exclusively of White people could perpetuate “harmful stereotypes and generalizations based on race,” as reported by Fox News Digital.

The situation quickly escalated into a controversy, with some critics labeling it the “woke” AI scandal. Matters were not helped when users discovered that Gemini was producing diverse but historically inaccurate images. For instance, it depicted one of America’s Founding Fathers as a Black man and the Pope as a brown woman. It also generated people of color in Nazi uniforms when prompted for an image of a 1943 German soldier.

The backlash was swift and severe, leading Google CEO Sundar Pichai to admit that Gemini had offended some users. In response, Google temporarily halted Gemini’s ability to generate images of people. The company characterized the episode as good intentions gone wrong, emphasizing that it had tuned Gemini to avoid the “traps” seen in earlier AI image-generation technology, which often displayed biases against minorities.

Historically, new technological products have exhibited biases, ranging from blood oxygen monitors that read less accurately for some ethnic groups to the underrepresentation of women in clinical drug trials. In AI, the problem is compounded by biases present in the training data the models learn from.

This scandal raises a broader question: if Big Tech companies like Google, which serve as gatekeepers to the world’s information, are reshaping historical information to fit ideological beliefs and cultural edicts, what else about the present or past might they be altering? As George Orwell warned, who controls the past controls the future, and whoever controls the present controls the past.

As AI becomes increasingly sophisticated, fears of Big Tech censorship and manipulation of information (with or without government involvement) will only intensify. Conversational AI like ChatGPT may already be replacing search as the preferred way to find and summarize information. Both Google and Microsoft have responded to this trend by investing heavily in AI since ChatGPT’s success.

Even The Economist has asked, with regard to AI, “Is Google’s 20-year dominance of search in peril?” Apple is also reportedly considering integrating OpenAI’s technology and Google’s Gemini into new versions of its iPhones, potentially exposing far more people to AI on a regular basis.

As an educator, I have observed this trend firsthand among my students. They often prefer ChatGPT not only to find information but also to summarize it for them in tidy paragraphs. To the younger generation, AI is making web search engines as antiquated as the physical card catalogs in today’s libraries.

However, the hallucination problem with today’s AI poses a significant challenge: AI sometimes simply generates false information. I have experienced this firsthand when students submitted AI-generated assignments complete with plausible-looking references that did not exist.

Given the hallucination problem, whoever leads in AI will be tempted to establish their own rules for what AI should and shouldn’t produce. Those rules will inevitably reflect each company’s biases and culture, potentially restricting or altering what AI is allowed, or willing, to show us.

This scandal goes beyond excessive diversity, equity, and inclusion (DEI) enthusiasm at one company. It may be a harbinger of what lies ahead for AI, and for Big Tech’s role in shaping our understanding of the past, present, and future. In a few years, you might simply ask your helpful AI companion for some historical information, only to be met with a polite refusal: “I’m sorry, Dave. I’m afraid I can’t do that.”