1. Don’t exaggerate your AI product’s abilities. The FTC has recently warned businesses that false or deceptive claims about the capabilities of AI products can subject companies to liability, as with any deceptive advertising claims. In the AI context, do not tout the AI capabilities of your product if there are none or if such capabilities are not a material attribute of your product or the applicable claim. Make sure you have substantiation for advertising claims about your AI product or its features, and if you reference the capabilities of competitive products, ensure you have substantiation for those competitive claims as well. Remember that regulatory actions may be brought not only by the FTC; there is also risk of state attorney general actions, consumer class actions and competitor claims for false advertising. If investors relied on your company’s representation that AI is a critical part of your product, you may also face additional claims.
2. Clearly disclose the rights and restrictions involved with your product. The FTC has expressed concern that consumers are not being informed of the types of rights they are purchasing and the associated use restrictions for digital products, and that AI can make these problems worse. Companies should clearly disclose the exact terms and rights being provided to or taken from consumers when ingesting any content onto a platform or selling products. Companies should also avoid unilaterally making material changes to these terms and conditions without consent from consumers who expect certain ownership or other usage rights, or from creators who do not expect to give away certain rights in their creations.
3. Be clear when items are created using AI, if the situation warrants. In certain circumstances, consumers may expect that their digital products or content are created by human creators rather than AI tools. If the consumer is misled or otherwise reasonably believes a product or content is human created when it is not, disclosure is critical to helping avoid deception in the marketplace. While disclosure may not always resolve the deception concern, failing to disclose how certain digital items are created could subject a company or creator to liability under Section 5 of the FTC Act and applicable state laws.
4. Disclose commercial relationships. If your AI’s output steers a consumer to a particular website or service provider because of a commercial or affiliate relationship with that website or provider, the relationship must be disclosed, just as in any other circumstance where there is a material connection between a marketer and a third party that may impact the purchase intent of the end consumer. The FTC recently noted that the use of AI influencers and deepfake AI in advertising needs to be disclosed to consumers or it may violate the FTC Endorsement and Testimonial Guides.
5. Evaluate and disclose the extent to which training data uses protected materials. The FTC recently warned that using protected materials, such as copyrighted creative works used without consent, in training data for generative AI tools may raise concerns, among other things, under consumer deception or unfairness laws. This is especially problematic if your generative AI product reproduces some of these protected materials in outputs that are then used by consumers, opening them up to liability for use of the protected works. The FTC has made it clear that this information could reasonably influence consumers’ and businesses’ decisions to use your generative AI tool. Be careful about the data used to train your AI models and carefully balance both the regulatory and IP risks (including potential fair use defenses) associated with training data.
6. Consider your product’s risks. If you are making or selling a product using generative AI, think about the implications of its use before advertising and/or selling the product. The FTC has been active against businesses that fail to take reasonable measures to prevent consumer injury from the use of their technologies. Think carefully about the composition of your data sets to help avoid harmful biases or outcomes, and consider how to prevent harms such as discrimination or other negative results that may violate an end user’s rights or be unfair in the marketplace.
7. Be clear regarding your product’s risks. Given the complexity involved with AI products, consumers may not understand the risks or potential implications of your product and its AI. You, however, must understand and disclose the reasonably foreseeable risks associated with your product. For example, regulators and recently proposed legislation are focused on explainability, transparency and providing consumers the ability to opt out of AI decision tools.
8. Know AI’s role in your product. The FTC has looked, and will continue to look, under the hood of products when analyzing the truth of claims that products are AI-enabled or AI-powered. The FTC has cautioned that there is a difference between products that use AI and products that were merely developed using AI tools.
9. Don’t use AI deceptively. Using AI to trick consumers into harmful choices has been a common theme in recent FTC actions. If your AI manipulates customers into accepting financial offers or making purchases, this may be considered deliberately steering consumers in a harmful direction.
10. Hold yourself accountable. Your algorithm is your responsibility. Be as transparent as possible regarding the limitations of your product and any relationships that would influence a consumer. If you move too fast during development, unwinding that work later can be considerably expensive or can impair your model entirely.