WASHINGTON, D.C.—In a new blog post, Rick Kaplan, NAB chief legal officer and executive vice president of Legal and Regulatory Affairs, contends that the FCC’s proposed rules for disclosing AI content in advertising are misguided and risk “doing more harm than good.”
In July the FCC moved forward with a proposal to require disclosure of AI-generated content in TV and radio ads. The plan drew immediate opposition from both Republican-appointed commissioners, Nathan Simington and Brendan Carr.
In August, the NAB and the Motion Picture Association asked the FCC to extend the comment period for the proposal, which the FCC partially granted, pushing the deadlines to September 19 for comments and October 11 for replies. Those dates make it unlikely that the FCC could enforce the rules before the November election.
In the Sept. 20 blog post, Kaplan acknowledged that “artificial intelligence (AI) is reshaping the entire political landscape, influencing not only how campaigns are conducted but also how voters access and process information about them. Its rise brings serious risks, including the spread of deepfakes – AI generated images, audio or video that distort reality. These deceptive tactics threaten to undermine public trust in elections and NAB supports government efforts to curtail them.”
But Kaplan argued that those problems are best addressed by Congress, not the FCC. “Unfortunately, due to the FCC’s limited regulatory authority, this rule risks doing more harm than good,” he said. “While the intent of the rule is to improve transparency, it instead risks confusing audiences while driving political ads away from trusted local stations and onto social media and other digital platforms, where misinformation runs rampant.”
Kaplan also contended that “deepfakes and AI generated misleading ads are not prevalent on broadcast TV or radio. These deceptive practices thrive on digital platforms, however, where content can be shared quickly with little recourse. The FCC’s proposal places unnecessary burdens on broadcasters while the government ignores the platforms posing the most acute threat. This approach leaves much to be desired.”
Kaplan added that the FCC’s proposed disclaimer in broadcast ads is too generic to do much good. “This generic disclaimer doesn’t provide meaningful insight for audiences,” he said. “AI is often used for routine tasks like improving sound or video quality, which has nothing to do with deception. By requiring this blanket disclaimer for all uses of AI, the public would likely be misled into thinking every ad is suspicious, making it harder to identify genuinely misleading content.”
The new rules could also prompt a shift of ads from broadcast to digital media, where no such requirements would apply. AdImpact projects that broadcasters will take in $5.4 billion in political ads in 2024 out of a total of $10.7 billion in political advertising.
“To truly tackle the issue of deepfakes and AI-driven misinformation, we need a solution that addresses all platforms, not just broadcast TV and radio,” Kaplan concluded. “Congress is the right body to create consistent rules that hold those who create and share misleading content accountable across both digital and broadcast platforms. Instead of the FCC attempting to shoehorn new rules that burden only broadcasters into a legal framework that doesn’t support the effort, Congress can develop fair and effective standards that apply to everyone and benefit the American public.”