Starting in November, Google will require political advertisements to prominently disclose when they feature synthetic content — such as images generated by artificial intelligence — the tech giant announced this week.
Political ads that feature synthetic content that “inauthentically represents real or realistic-looking people or events” must include a “clear and conspicuous” disclosure, Google said Wednesday in a blog post. The rule, an addition to the company’s political content policy covering Google and YouTube, will apply to image, video and audio content.
The policy update comes as campaign season for the 2024 US presidential election ramps up and as a number of countries around the world prepare for their own major elections the same year. At the same time, artificial intelligence technology has advanced rapidly, allowing anyone to cheaply and easily create convincing AI-generated text and, increasingly, audio and video. Digital information integrity experts have raised alarms that these new AI tools could lead to a wave of election misinformation that social media platforms and regulators may be ill-prepared to handle.
AI-generated images have already begun to crop up in political advertisements. In June, Florida Gov. Ron DeSantis’ presidential campaign posted a video to X that used apparently AI-generated images showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s then-top infectious disease specialist, were tricky to spot: They appeared alongside real images of the pair, under a text overlay saying, “real life Trump.”
The Republican National Committee in April released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included a small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington, DC, who were shown the video by CNN did not notice the disclaimer on their first watch.
In its policy update, Google said it will require disclosures on ads using synthetic content in a way that could mislead users. The company said, for example, that an “ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do” would need a label.
Google said the policy will not apply to synthetic or altered content that is “inconsequential to the claims made in the ad,” including changes such as image resizing, color corrections or “background edits that do not create realistic depictions of actual events.”
A group of top artificial intelligence companies, including Google, agreed in July to a set of voluntary commitments put forth by the Biden administration to help improve safety around their AI technologies. As part of that agreement, the companies said they would develop technical mechanisms, such as watermarks, to ensure users know when content was generated by AI.
The Federal Election Commission has also been exploring how to regulate AI in political ads.