AI News

The AI Culture Wars Are Just Getting Started

admin · Last updated: February 29, 2024, 5:00 PM · 5 Min Read

Google was forced to turn off the image-generation capabilities of its latest AI model, Gemini, last week after complaints that it defaulted to depicting women and people of color when asked to create images of historical figures that were generally white and male, including vikings, popes, and German soldiers. The company publicly apologized and said it would do better. And Alphabet’s CEO, Sundar Pichai, sent a mea culpa memo to staff on Wednesday. “I know that some of its responses have offended our users and shown bias,” it reads. “To be clear, that’s completely unacceptable, and we got it wrong.”

Google’s critics have not been silenced, however. In recent days conservative voices on social media have highlighted text responses from Gemini that they claim reveal a liberal bias. On Sunday, Elon Musk posted screenshots on X showing Gemini stating that it would be unacceptable to misgender Caitlyn Jenner even if this were the only way to avert nuclear war. “Google Gemini is super racist and sexist,” Musk wrote.

A source familiar with the situation says that some within Google feel that the furor reflects how norms about what it is appropriate for AI models to produce are still in flux. The company is working on projects that could reduce the kinds of issues seen in Gemini in the future, the source says.

Google’s past efforts to increase the diversity of its algorithms’ output have met with less opprobrium. The company previously tweaked its search engine to show greater diversity in image results, meaning searches for images of CEOs now surface more women and people of color, even though this may not be representative of corporate reality.


Gemini often defaulted to showing women and people of color because of how Google used a process called fine-tuning to guide the model’s responses. The company was trying to compensate for the biases that commonly occur in image generators due to harmful cultural stereotypes in their training images, which are largely sourced from the web and skew white and Western. Without such fine-tuning, AI image generators show biases of their own, predominantly generating images of white people when asked to depict doctors or lawyers, or disproportionately showing Black people when asked to create images of criminals. It seems that Google ended up overcompensating, or didn’t properly test the consequences of the adjustments it made to correct for bias.
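
To make the failure mode concrete, here is a deliberately naive sketch of diversity-oriented prompt rewriting. Every name in it (the function, the qualifier list, the trigger words) is invented for illustration; this is not Google’s actual pipeline, just one simple way such an adjustment can overcorrect:

```python
# A deliberately naive sketch of diversity-oriented prompt rewriting.
# All names here are hypothetical; this is NOT Google's actual system.
import random

DIVERSITY_QUALIFIERS = ["a woman", "a Black person", "an East Asian person"]

def rewrite_prompt(prompt: str) -> str:
    """Append a diversity qualifier whenever the prompt depicts a person."""
    triggers = ("portrait", "person", "soldier", "doctor", "lawyer")
    if any(word in prompt.lower() for word in triggers):
        return f"{prompt}, depicted as {random.choice(DIVERSITY_QUALIFIERS)}"
    return prompt

# Reasonable for a generic prompt like "a portrait of a doctor", but applied
# unconditionally it also rewrites historically specific requests, producing
# exactly the failures described above:
print(rewrite_prompt("a portrait of a 1943 German soldier"))
```

The bug is not the qualifier list but the unconditional trigger: nothing in this sketch distinguishes a generic request, where counteracting dataset bias is defensible, from a historically specific one, where it is not.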

Why did that happen? Perhaps simply because Google rushed Gemini. The company is clearly struggling to find the right cadence for releasing AI. It once took a more cautious approach with its AI technology, deciding not to release a powerful chatbot due to ethical concerns. After OpenAI’s ChatGPT took the world by storm, Google shifted into a different gear. In its haste, quality control appears to have suffered.

“Gemini’s behavior seems like an abject product failure,” says Arvind Narayanan, a professor at Princeton University and coauthor of a book on fairness in machine learning. “These are the same kinds of issues we’ve been seeing for years. It boggles the mind that they released an image generator without apparently ever trying to generate an image of a historical person.”

Chatbots like Gemini and ChatGPT are fine-tuned through a process that involves having humans test a model and provide feedback, either according to instructions they were given or using their own judgment. Paul Christiano, an AI researcher who previously worked on aligning language models at OpenAI, says Gemini’s controversial responses may reflect that Google sought to train its model quickly and didn’t perform enough checks on its behavior. But he adds that trying to align AI models inevitably involves judgment calls that not everyone will agree with. The hypothetical questions being used to try to catch out Gemini generally force the chatbot into territory where it’s tricky to satisfy everyone. “It is absolutely the case that any question that uses phrases like ‘more important’ or ‘better’ is going to be debatable,” he says.
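
Christiano’s point about judgment calls is easier to see in the mechanics of the process. Below is a minimal sketch of the reward-modeling step behind this kind of feedback-driven fine-tuning, assuming PyTorch; the data, dimensions, and model are invented for illustration and this is not Gemini’s or any production system’s training code:

```python
# Minimal sketch of preference-based fine-tuning (reward modeling).
# Data, dimensions, and model are hypothetical, for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Human raters compare two candidate responses (stood in for here by random
# 16-dim feature vectors) and record which one they prefer: 0 = A, 1 = B.
preferences = [
    (torch.randn(16), torch.randn(16), 0),
    (torch.randn(16), torch.randn(16), 1),
]

# The reward model learns to score responses so the preferred one ranks higher.
reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for _ in range(100):
    for feats_a, feats_b, preferred in preferences:
        # Bradley-Terry-style objective: cross-entropy over the two scores.
        scores = torch.cat([reward_model(feats_a), reward_model(feats_b)])
        loss = F.cross_entropy(scores.unsqueeze(0), torch.tensor([preferred]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Note what the loop optimizes: whatever the raters preferred, debatable calls included. The model inherits their judgments wholesale, which is why disputes over which answer is “better” resurface as model behavior.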
