Google has recently been under the spotlight for re-editing an advertisement featuring its artificial intelligence (AI) tool, Gemini, after a notable error in the ad inflated the worldwide popularity of Gouda cheese. Originally slated to air during the highly watched Super Bowl, the commercial aimed to demonstrate Gemini's capabilities by showing the AI helping a Wisconsin cheesemonger craft a product description. However, the ad claimed that Gouda accounted for "50 to 60 percent of global cheese consumption," a statistic quickly called out as "unequivocally false" by a blogger on the social media platform X.
In response to the public critique, Google executive Jerry Dischler sought to clarify the situation, arguing that this was not an instance of the common AI failure known as "hallucination," in which a model generates false information. Dischler instead attributed the inaccurate figure to the online sources Gemini drew on, stating, "Gemini is grounded in the Web – and users can always check the results and references." He emphasized that the misleading percentage appeared on multiple sites across the internet, implying that the fault lay with the data sources rather than with Gemini itself.
To rectify the blunder, Google edited the commercial so that the revised version no longer referenced Gouda's consumption figures, then released the updated advertisement on YouTube, the video platform Google owns. Company representatives explained that the decision to remake the ad followed discussions with the cheesemonger featured in it; based on the cheesemonger's feedback, Gemini was asked to craft a product description without statistical claims.
The episode highlights the gap between what advanced AI tools promise and how they perform in real-world applications. It is particularly embarrassing for Google, given the Super Bowl's immense visibility and the scrutiny advertisements face during the event. Nate Hake, the blogger who first flagged the error, described it as a clear example of "AI slop," reflecting broader concerns about the reliability of AI-generated content in high-stakes settings.
This is not an isolated incident for Google, which has faced criticism over its AI products before. A year ago, Gemini's image generation was temporarily "paused" after backlash over outputs some deemed "woke," such as images that inaccurately depicted one of the Founding Fathers as Black. Google's AI-generated search summaries have also drawn ridicule for bizarre advice involving cheese, suggesting that users apply "non-toxic glue" to help cheese stick to pizza, and even advising people to eat one rock per day, citing "geologists' recommendations."
Nor are such difficulties limited to Google. In January, Apple suspended its AI-generated news alert summaries after the feature produced a series of inaccurate headlines, exposing the pitfalls of relying heavily on AI for content generation.
Super Bowl advertising has long generated controversies of its own, underlining how difficult it is to balance humor and information in such a critical marketing moment. Last year, for instance, Uber Eats modified its Super Bowl advertisement after backlash over content deemed insensitive to serious issues such as food allergies. Together, these episodes illustrate the challenges companies and consumers face as the technology advances rapidly, underscoring the need for vigilance and accuracy in the content AI systems produce.