Steve Harvey, the well-known host of “Family Feud” and a popular morning radio show, has found himself at the intersection of humor and technology in recent years. While he’s used to sharing laughs with contestants and offering life advice to listeners, Harvey has also become an unexpected target of AI-generated memes. Many of these digital creations are lighthearted, depicting him in amusing scenarios such as living the life of a rockstar or fleeing from fictional demons. The more troubling side of this trend, however, involves the malicious use of his likeness and persona, particularly in online scams.
In a disturbing twist, AI technology is being exploited by unscrupulous actors who use Harvey’s voice and image to perpetrate scams. Last year, he was one of several high-profile figures, including Taylor Swift and Joe Rogan, whose likenesses were manipulated by AI to promote deceptive schemes claiming to provide government grants. In one such instance, an audio clip mimicking Harvey’s voice falsely declared, “I’ve been telling you guys for months to claim this free $6,400 dollars,” showcasing the ease with which AI can fabricate credible-sounding messages.
Recognizing the significant threat posed by these scams, Harvey has taken a proactive stance by advocating for new legislation. His efforts are aligned with a growing concern in Congress, which is currently considering several bills aimed at regulating the unauthorized use of AI-generated content. One notable proposal is the No Fakes Act, designed to hold creators and platforms accountable for the malicious use of AI-generated images, videos, and audio without consent.
The bipartisan group of legislators behind the act includes Democratic Senators Chris Coons of Delaware and Amy Klobuchar of Minnesota, alongside Republican Senators Marsha Blackburn of Tennessee and Thom Tillis of North Carolina. They plan to reintroduce the bill soon, placing it at the center of the broader legislative debate over protecting individuals from the misuse of AI technology. Another bill currently before Congress, the Take It Down Act, seeks to criminalize AI-generated deepfake pornography and has drawn notable support from various sectors, including prominent political figures.
In 2025, Harvey said scams using his likeness had reached “an all-time high.” He expressed deep concern for fans and others who might be harmed by these schemes. His worry stems from the fact that his reputation is built on a foundation of authenticity, which he fears these scams could compromise.
It’s not just Harvey who is speaking up; other celebrities have joined the chorus of voices calling for legislative action. Actress Scarlett Johansson, who has herself been impersonated by AI, has spoken out about the urgent need for effective regulation, noting that other nations are responding to the wave of AI advancements more rapidly than the U.S. Her statement underscores a prevalent fear that, without legislative safeguards, the misuse of the technology will continue to escalate unchecked, creating a perilous digital landscape.
Highlighting the collective frustration, Harvey noted that the fundamental issue is freedom of expression being misconstrued as a license for exploitation. He believes Congress must intervene swiftly to prevent more people from falling victim to the misuse of these emerging technologies.
Efforts to curb the rise of deepfake content have spurred innovation in monitoring and enforcement. Companies like Vermillio AI are at the forefront of this movement: the firm partners with talent agencies and studios, using its TraceID platform to track the illicit use of AI-generated content. Vermillio’s CEO, Dan Neely, pointed to the astonishing growth of deepfakes, from about 19,000 instances in 2018 to nearly a million created in a matter of minutes today, underscoring the urgency of the problem.
Despite advances in tracking and legislation, critics warn of potential flaws in the proposed laws. Several advocacy groups worry that the No Fakes Act could infringe on First Amendment rights and lead to excessive regulation that hinders free expression. They argue for a more balanced approach that protects free speech while still shielding individuals from the dangers of unregulated AI misuse.
In a rapidly evolving digital ecosystem, celebrities are increasingly struggling to fend off impersonators who remain anonymous online. And while services such as Vermillio cater to well-known individuals, many creators lack the resources to combat imitation effectively. Harvey’s call to action is clear: “The sooner we do something, I think the better off we’ll all be,” he said, urging swift protection against AI-generated scams and malicious content.