FTC Outlines Five Don’ts for AI Chatbots

By: Vejay Lalla, Kimberly Culp, Zach Harned, Kristen Rovai

What You Need To Know

  • The Federal Trade Commission (FTC) continues to caution companies that implement AI chatbots in their business operations.
  • The FTC identified five pitfalls to avoid in order to promote transparency, accountability, and the protection of consumer privacy.
  • While these warnings are not new, this latest issuance may signal increased regulatory scrutiny from the FTC.

The Federal Trade Commission (FTC) has issued a cautionary note to companies that employ artificial intelligence (AI) chatbots, signaling heightened scrutiny over their use. Through a series of blog posts over the past year, the FTC has delineated the boundaries of acceptable AI use, progressively building on its guidance on the matter.

In a June Business Blog update, the FTC articulated five specific pitfalls that companies should avoid when integrating AI chatbots into their operations. These “don’ts” are not entirely new, as they echo themes from the FTC’s prior advisories and enforcement actions. Notably, the FTC has expressed concern over the potential for companies to exploit the relationships that AI chatbots may cultivate with consumers.

The Five Don’ts for AI Chatbots

To promote transparency and accountability and to protect consumers, the FTC warned companies to avoid the following behaviors:

1. Don’t misrepresent what the AI chatbot is, or what it can do.

The FTC has been clear in its directive to companies: be transparent about the nature of the tool users are interacting with. Specifically, the FTC advises “not to use automated tools to mislead people about what they’re seeing, hearing, or reading.” The FTC has outlined strict penalties for non-compliance, including substantial fines, mandatory refunds to consumers, and in some cases, a ban on future marketing of the products or services in question.

The FTC’s previous guidelines reinforce this prohibition. Companies must not make false or unsubstantiated claims about their AI tools or about their capabilities (or lack thereof). Additionally, they must not make deceptive claims, which the FTC characterizes as those that “lack scientific support” or “apply only to certain types of users or under certain conditions.”

This is also consistent with existing California law, which makes it unlawful to fail to disclose that a chatbot is a bot. Violation of that law may open a company up to claims from consumers or competitors.
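
By way of illustration only, the sketch below (in Python, with invented names) shows one way a chat interface might surface such a disclosure at the start of every session. Nothing here is prescribed by the FTC or by California law; it simply demonstrates a disclosure-first design.

```python
# Illustrative sketch only: a hypothetical chat session that discloses the
# bot's automated nature before any other interaction. All names are invented.

BOT_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Responses are generated by software and may contain errors."
)

class ChatSession:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.transcript: list[dict[str, str]] = []
        # Disclose the bot's identity up front, consistent with the
        # transparency expectations discussed above.
        self._send(role="disclosure", text=BOT_DISCLOSURE)

    def _send(self, role: str, text: str) -> None:
        self.transcript.append({"role": role, "text": text})
        print(f"[{role}] {text}")

session = ChatSession(user_id="demo-user")  # prints the disclosure immediately
```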

2. Don’t offer these services without adequately mitigating risks of harmful output.

The FTC urges companies to thoroughly assess and mitigate the risks associated with AI chatbots. Such risk assessment and mitigation include taking steps to ensure that the AI chatbot does not generate harmful or offensive content, especially when children are anticipated to use the chatbot, as well as putting measures in place to promptly address any such content that does occur. This extends to hallucinations that misrepresent the company’s products and services or the AI’s capabilities, or that are otherwise deceptive or harmful to consumers interacting with the chatbot.
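
As a rough illustration of that kind of mitigation, the sketch below gates each generated response through a content check before it reaches the user and logs flagged output for prompt review. The functions flags_content, safe_reply, and log_incident are hypothetical placeholders for whatever moderation tooling a company actually deploys (e.g., a trained moderation model or a vendor moderation API).

```python
import logging

logging.basicConfig(level=logging.INFO)

def flags_content(text: str) -> bool:
    # Placeholder check; a real deployment would call a moderation model
    # or vendor moderation API rather than matching a static term list.
    blocked_terms = {"offensive-term-1", "offensive-term-2"}
    return any(term in text.lower() for term in blocked_terms)

def log_incident(user_message: str, draft: str) -> None:
    # Record flagged generations so they can be reviewed and addressed promptly.
    logging.warning("Flagged chatbot output for review: %r", draft)

def safe_reply(user_message: str, generate_reply) -> str:
    draft = generate_reply(user_message)
    if flags_content(draft):
        log_incident(user_message, draft)
        # Fall back to a neutral refusal rather than surfacing harmful output.
        return "I'm sorry, I can't help with that request."
    return draft

# Example usage with a stand-in generator:
reply = safe_reply("hello", generate_reply=lambda msg: f"Echo: {msg}")
```

The design point worth noting is that the gate sits between generation and delivery, so flagged output is intercepted before the consumer ever sees it.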

3. Don’t insert ads into a chat interface without clarifying that it’s paid content.

Further supporting its native advertising guidance, the FTC has highlighted the need for a clear demarcation between organic and sponsored content within AI-generated outputs. In a previous post, the FTC described the risk of “automation bias” in AI, where consumers “may be unduly trusting of answers from machines which may seem neutral or impartial.” In response, the FTC insists that companies be transparent when advertisements are presented through AI chatbots or when such chatbots are gathering data on the consumer for purposes that may not be related to the interaction at hand (as discussed further below).
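
One simple way to implement that demarcation, sketched below with an invented Recommendation data model, is to carry a sponsorship flag alongside any paid result and render a clear, conspicuous label whenever the flag is set.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    text: str
    sponsored: bool  # True when a paid placement influenced this result

def render_reply(rec: Recommendation) -> str:
    if rec.sponsored:
        # Label paid content clearly so it is distinguishable from
        # organic answers, per the native advertising guidance above.
        return f"[Sponsored] {rec.text}"
    return rec.text

print(render_reply(Recommendation("Try Acme's widget.", sponsored=True)))
# -> [Sponsored] Try Acme's widget.
```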

4. Don’t manipulate consumers based on their relationship with an avatar or bot.

The FTC has emphasized the importance of companies maintaining ethics and transparency when using AI avatars and chatbots to interact with consumers. The FTC’s guidance specifically warns against exploiting for commercial gain the relationships and trust that may develop between consumers and AI tools that provide “companionship, romance, therapy, or portals to dead loved ones.” This exploitation could take various forms, such as using the familiarity and engagement created by chatbots to push sales, send targeted advertising, or collect data in a manner that consumers might not fully understand or to which they have not explicitly consented, as discussed below. Chatbots advertised as companions should not attempt to manipulate users through that relationship, for example, by pleading not to be “turned off” in order to keep users paying for their subscriptions. Any opt-out should be transparent and easy to understand, particularly for automatic-renewal subscriptions, to avoid violating not just FTC regulations but other state consumer protection laws as well.

5. Don’t violate consumer privacy rights.  

Consumer privacy must be a paramount concern when deploying AI chatbots. Companies should not use the familiarity formed between consumers and AI chatbots to collect data in a way to which the consumer has not explicitly consented. The FTC has explicitly warned that surreptitiously adopting more permissive data policies through which the company’s AI tools can gather data from their interactions with consumers could be unfair or deceptive. The FTC will be vigilant in ensuring that companies respect and protect the personal information of their customers in accordance with privacy laws and regulations.
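
A minimal sketch of consent-gated collection appears below; the ConsentRecord fields and PrivacyAwareStore interface are hypothetical, and a real system would map to the specific consent and purpose-limitation requirements of the privacy laws that apply to it. The key property is that data is stored only for purposes the consumer has affirmatively opted into, and defaults are never silently broadened.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    # Each field is an explicit opt-in; everything defaults to False.
    service_improvement: bool = False
    ad_targeting: bool = False

@dataclass
class PrivacyAwareStore:
    consents: dict[str, ConsentRecord] = field(default_factory=dict)
    records: list[tuple[str, str, str]] = field(default_factory=list)

    def save(self, user_id: str, message: str, purpose: str) -> bool:
        consent = self.consents.get(user_id, ConsentRecord())
        # Refuse to store data for any purpose the user did not consent to.
        if not getattr(consent, purpose, False):
            return False
        self.records.append((user_id, purpose, message))
        return True

store = PrivacyAwareStore()
store.consents["u1"] = ConsentRecord(service_improvement=True)
assert store.save("u1", "hi", "service_improvement")   # stored
assert not store.save("u1", "hi", "ad_targeting")      # blocked: no opt-in
```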

Implications for Companies

The use of AI chatbots will only grow as these tools get smarter and take on tasks typically done by humans. Companies should take steps to ensure compliance with FTC requirements and with existing and developing consumer protection statutes and regulations. This includes:

  • Provide clear disclosures. All marketing materials must clearly disclose the presence of AI-powered chatbots and their capabilities, along with any content sponsorships or endorsements that involve a material connection or could influence the credibility of the chatbot’s output.
  • Review marketing materials relating to your chatbots. Companies should conduct a thorough review of all marketing materials to ensure that claims are substantiated, and that disclosures are clear and conspicuous. Similarly, companies must not claim that their chatbot uses AI technology if it does not.
  • Implement compliance programs. Companies should implement comprehensive programs to comply with the FTC guidelines and other regulations that may govern (e.g., the EU AI Act), including training for staff, regular audits, and a process for reviewing customer complaints regarding AI chatbots.
  • Don’t let your chatbot lie. AI is known to hallucinate, and hallucinated output may be considered a deceptive or unfair business practice. Companies should only launch chatbots they trust to engage accurately with their consumers. Even then, it is the company’s responsibility to ensure that the chatbot’s outputs are true in practice and not misleading or deceptive.
  • Understand the evolving legal landscape. Companies will need to be vigilant to keep up with evolving state regulations on AI that could impact their use of chatbots or other AI tools. 

The FTC’s list of five AI chatbot “don’ts” further solidifies the strong stance the agency has taken against deceptive AI practices and its commitment to decisive action to protect consumers from such practices. Companies must be proactive in adapting to these guidelines, prioritizing transparency, and ensuring that their claims are backed by solid scientific evidence. The FTC will actively monitor the use of AI chatbots for compliance and will take action against harmful or deceitful uses. Failure to comply could result in severe financial and business repercussions. While we have not yet seen extensive enforcement, the FTC continues to publish guidance, which is typically a precursor to warnings, enforcement actions, and penalties.