The Federal Trade Commission (FTC) continues to issue guidance on the use of generative artificial intelligence (AI) and the potential regulatory scrutiny facing companies and creators using these new tools in the market. While the FTC has previously addressed issues such as exaggerating the use of AI in a product or the potential deceptiveness of deepfakes and synthetic media, its most recent guidance focuses on the use of generative AI to create content and related digital products, particularly where third-party copyrighted content is used in training or is reflected in the ultimate creative outputs.
As social media has rapidly become a primary channel through which consumers interact with content and even purchase products, the FTC has long scrutinized the potential harms to consumers who encounter content, products and marketing in social media channels. In the context of consumer endorsements and native advertising, the FTC has focused intensely on transparency, bringing multiple enforcement actions and issuing warnings to companies that fall short. More specifically, the FTC has emphasized disclosure of endorsements that consumers may rely on when purchasing products or interacting with product reviews, and ensuring that consumers are well aware when they are interacting, or about to interact, with advertising rather than editorial content. For example, in the native advertising context, the FTC has stressed the concept of “deceptive door openers,” which lead consumers to engage in viewing content or purchasing activities before receiving all necessary disclosures.
Striking a similar theme in the context of AI, the FTC is now raising concerns about the potentially deceptive nature of AI-generated content or products in the context of consumer purchases and related activities. With respect to content, for example, the FTC cautions that marketing songs generated by AI as the work of specific recording artists, or selling books written by AI as the work of humans, is likely inherently deceptive. The FTC notes that this accords with its longstanding guidance that “[c]ompanies are always obliged to ensure that customers understand what they’re getting for their money.”
In this most recent guidance, the FTC not only emphasized deception in the context of the traditional end consumer but also recognized that content creators have reasonable expectations regarding their rights in the content they have created and how that content can be used. In particular, the FTC noted that unilateral changes to terms and conditions that upend these creators’ expectations could be deceptive. To that end, it is important for a platform to adequately disclose changes to its terms that have a material impact on creators and to obtain affirmative consent to those changes. The FTC asserts that it “may take a close look if such a platform isn’t living up to promises made to creators when they signed up to use it.” Relatedly, companies often state that consumers can “buy” digital products like books, music, movies and games when they are actually granting only a limited, revocable license. Companies should always help consumers understand what they are paying for and what they are receiving. While neither of these issues is AI-specific, both are resurfacing with the explosion of generative AI, especially given the rapid-fire updating of generative AI platforms’ terms of service.
Finally, the FTC cautions that generative AI tools trained on copyrighted or otherwise protected material could raise issues that rise to the level of unfair or deceptive practices, noting that this is “especially true if companies offering the tools don’t come clean about the extent to which outputs may reflect the use of such material.” In our view, this is the most intriguing aspect of the FTC guidance because it adds yet another hurdle for those building or using generative AI tools and large language models (LLMs) to consider, in addition to the copyright and related intellectual property questions still in flux in the courts. This positioning indicates that it is not just training on user data without informing rightsholders that could be construed as deceptive; rather, training on any protected material without disclosing that fact to consumers may be deceptive and in violation of Section 5 of the FTC Act. The FTC observes that this information might inform a consumer’s decision to use one tool over another. Companies building AI tools will therefore have to wrestle with a tension: telling customers whether, and to what extent, their training data includes copyrighted or otherwise protected material could mitigate the risk of FTC enforcement on this concern but might increase the risk of litigation by the rightsholders of that training data.
Extrapolating further to the creative outputs of AI tools themselves, it is certainly possible that almost any use of AI in creative content or other creative output would similarly need to be disclosed (e.g., #AIcreation) if a consumer thought they were interacting with human-produced content or was otherwise deceived by their interaction with, or purchase of, the products. The FTC has already made clear in its recently updated endorsement guidance that any content created by a non-human influencer must be disclosed as such.
Here are some key considerations to keep in mind:
Also published by The Licensing Journal.