FTC Updates Rules to Address AI Deepfake Threats to Consumer Safety
On February 16, the US Federal Trade Commission (FTC) proposed updates to a rule addressing artificial intelligence (AI) deepfakes. The agency said the proposed changes would protect consumers from AI-driven impersonation.
According to the ‘Rule on Impersonation of Government and Businesses’ document, those who use AI deepfakes to impersonate businesses or government agencies could face legal action.
No AI Deepfakes Allowed for Businesses and Government Agencies
The FTC said the changes are necessary due to the prevalence of impersonations of businesses, government officials, and parastatals.
The goal is to protect consumers from harm caused by the misuse of generative AI platforms.
The updated rule will come into effect 30 days following its publication in the Federal Register.
For now, public comments are welcome for the next 60 days. Once the rule is enacted, the FTC will be empowered to go after scammers who defraud users by impersonating legitimate businesses or government agencies.
The AI industry has come a long way since OpenAI's high-profile launch of ChatGPT in November 2022. The company, led by Sam Altman, recently unveiled a new product called Sora.
Sora uses AI prompts to generate realistic videos with highly detailed scenes, complex camera motions, and vibrant emotions.
Powerful AI tools like those offered by OpenAI and Google have increased productivity for many people and businesses.
However, they have also become effective tools in the hands of cybercriminals. With such tools, criminals can easily alter someone's appearance or voice to deceive a target audience.
The FTC rule change will come down hard on these criminals to ensure they face the full weight of the law.
While no federal law currently makes AI-generated recreations of a person illegal, US Senators Chris Coons, Marsha Blackburn, and Thom Tillis have taken steps to address the issue.