Google Temporarily Suspends Gemini AI Image Generator Over Historical Inaccuracies

Acknowledging "recent issues" with historically inaccurate depictions, Google emphasized its commitment to resolving the concerns promptly.


Google has announced a temporary suspension of image generation of people in Gemini, its flagship generative AI suite. The decision follows recent instances of historically inaccurate depictions of people in the tool's outputs. The company said on the social media platform X that it is working on an update to improve the historical accuracy of generated images.

Google acknowledged the "recent issues" and said it is committed to resolving them promptly. As part of that effort, image generation of people in Gemini will remain paused until an updated version, aimed at delivering more accurate depictions, is released.

Google introduced Gemini's image generation tool earlier this month. Shortly afterward, incongruous depictions, such as U.S. Founding Fathers portrayed as individuals of different ethnic backgrounds, began circulating on social media, drawing criticism and scrutiny.

In response, Google confirmed it is aware of the problem and is actively working to improve the accuracy of historical depictions. While Gemini's image generation is intentionally designed to produce a diverse range of people, the recent errors underscore the need for refinement in this aspect of the technology.

Generative AI tools like Gemini produce outputs based on their training data and model parameters. Despite advances in the field, such tools have repeatedly drawn criticism for generating biased or stereotypical imagery, highlighting the importance of ongoing refinement and oversight.

Google faced a related controversy in 2015, when the image classification feature in its Photos app mislabeled Black people as gorillas. The company pledged to rectify the issue, but its subsequent actions were deemed insufficient, with Wired reporting that Google resorted to simply blocking the technology from recognizing gorillas altogether.