In February 2023, the Federal Trade Commission (FTC) issued guidance urging companies to keep their artificial intelligence (AI) claims in check and not to exaggerate the capabilities of their AI products or technology. We provided an analysis of that guidance, and the FTC has since issued an expansion of it, explaining that the original guidance dealt with the “fake AI problem,” while the more recent guidance, issued in March 2023, addresses the “AI-fake problem.”
This AI-fake problem is closely tied to the issue of deepfakes, such as the viral Pope Francis puffer coat image. Deepfakes have become easier to create and increasingly convincing over the last few years, and they span multiple modalities: the synthetic media can take the form of images, video, or even audio, with the FTC specifically calling out risks related to “voice clones.” The FTC cites the risk of such synthetic media being used deceptively in spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, as well as in creating malware or ransomware, mounting prompt injection attacks, and facilitating imposter scams, extortion, and financial fraud. It should be noted that synthetic media can also be used legitimately, for example in the entertainment industry; however, such uses involve legal complexities that require careful consideration.
The FTC warns that companies planning to develop a generative AI product capable of creating synthetic media or deepfakes should consider the following four issues.