As artificial intelligence continues to transform multiple industries, AI agents have emerged as one of the most promising—and compelling—applications of AI in the crypto space. From automated trading bots, to token-powered “personalities,” to applications that streamline the execution of crypto transactions, AI agents have the potential to redefine how users interact with digital assets and make decentralized finance more accessible. But as with any novel technology, the use of AI agents also brings legal risks that companies should consider before deploying them in consumer-facing services.
An AI agent is a computer program that uses AI to perform tasks based on the tools made available to it (e.g., integrations with third-party services). Like the chatbot applications that have become ubiquitous in the last couple of years, AI agents are powered by large language models (LLMs), but instead of only interacting with users conversationally, AI agents are connected to software tools that they can invoke in response to triggers from their environment. In blockchain applications, AI agents can assist with formatting blockchain messages, administer decentralized decision making, interact through social media accounts, and swap tokens.
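To illustrate the tool-calling pattern described above, below is a minimal Python sketch using entirely hypothetical names (swap_tokens, call_llm, run_agent) of an agent that executes a tool only when the model's output requests it. It is a simplified illustration under those assumptions, not a production implementation: a real agent would call an actual LLM, validate the model's output, and construct and sign real blockchain transactions.

```python
# Minimal, hypothetical sketch of an LLM-driven agent with a single tool.
# All names (swap_tokens, call_llm, run_agent) are illustrative only.
import json
from typing import Callable, Dict

def swap_tokens(from_token: str, to_token: str, amount: float) -> str:
    # Placeholder: a real agent would construct, sign, and submit a blockchain transaction here.
    return f"Swapped {amount} {from_token} for {to_token}"

# Registry of tools the agent is permitted to execute, keyed by name.
TOOLS: Dict[str, Callable[..., str]] = {"swap_tokens": swap_tokens}

def call_llm(prompt: str) -> str:
    # Stand-in for a call to a real LLM API; assume the model replies with a JSON tool request.
    return json.dumps({
        "tool": "swap_tokens",
        "args": {"from_token": "ETH", "to_token": "USDC", "amount": 1.0},
    })

def run_agent(user_request: str) -> str:
    # Parse the model's response and execute the requested tool, if it is registered.
    response = json.loads(call_llm(user_request))
    tool = TOOLS.get(response.get("tool"))
    if tool is None:
        return "Requested tool is not available."
    return tool(**response["args"])

print(run_agent("Swap 1 ETH into USDC"))  # -> "Swapped 1.0 ETH for USDC"
```

The design point relevant to the discussion below is that the agent software, rather than the user, decides when a registered tool runs based on the model's output; which tools are registered and how the model's requests are validated is where much of the risk sits.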
We outline below five key risk areas that companies should keep in mind when considering whether to incorporate AI agents into their service offerings.
1. Consumer Protection Regulations: Federal and state consumer protection regulations prohibit unfair, deceptive, or abusive acts or practices in commerce, and agencies like the Federal Trade Commission (FTC) have repeatedly warned that use of AI in service offerings does not change how these general rules are applied. Conduct that is likely to materially mislead reasonable consumers, such as making false representations about the reliability or capability of an AI agent, could expose companies to enforcement actions and penalties. In addition, regulators have indicated that AI-powered eligibility determinations for certain consumer products and services may be within the scope of fair lending laws and the Fair Credit Reporting Act, which require transparency in decision-making, protection against discriminatory impacts, and proper adverse action notices in certain circumstances. State laws may also apply, including California’s SB 1001, which requires companies to disclose to consumers when a chatbot is being used to communicate or interact with a person in California for the purpose of advertising or selling goods or services. Companies should be mindful of these principles when creating legal agreements, technical documentation, and marketing materials for services leveraging AI agents. Further, the FTC has made it clear that companies should, at the design stage of the service, consider the reasonably foreseeable ways in which AI services could be misused or cause harm, and take reasonable measures to prevent consumer injury. Performing appropriate risk assessments before AI agent-based services are launched is crucial to avoid running afoul of consumer protection regulations.
2. Financial Services Regulations: Where AI agents interact with or facilitate the provision of financial services, their creators should carefully consider whether there are applicable compliance obligations under existing laws or regulations. For example, using an AI agent (or other software) to raise funds from U.S. investors under an investment contract is likely to be considered a securities offering subject to regulation under the Securities Act.
3. AI-Specific Laws: AI-specific laws continue to emerge in several jurisdictions, and further legislative activity can be expected in the U.S. and abroad as AI technology matures and becomes widely adopted. Examples include California’s AB 2013 (which requires developers of generative AI systems to make certain disclosures about their training data), California’s SB 942 (which requires certain providers of generative AI systems to offer AI detection tools), and Colorado’s SB 24-205 (which requires developers of high-risk AI systems, including systems used in financial services, to provide certain disclosures and implement risk management policies). While many of these laws are oriented mainly toward developers of foundational AI models, they signal increasing legislative activity that may affect how companies deploying AI agents operate their businesses. Companies should closely monitor the evolving legislative landscape.
4. Tort Liability: If users of AI agent-powered services are harmed, or believe they have been harmed, by the services, companies could face tort claims, such as claims based on theories of negligence or product liability. A negligence claim would argue that the company breached its duty of care in deploying or managing the service, while a product liability claim would assert faulty design, insufficient warnings about potential dangers, or breaches of implied warranties. Companies should consider these issues when designing the services and crafting the agreements that will govern their use. Ensuring that the applicable terms of service include appropriate warranty disclaimers, limitation of liability provisions, and dispute resolution terms may help mitigate the risks associated with these types of claims.
5. Breach of Contract for Underlying LLMs: For companies relying on third-party LLMs (whether open source or not), it is important to verify that the licenses and terms governing use of such LLMs permit the intended use case. It is not uncommon for these licenses and terms to include restrictions on use of the LLMs for certain purposes, such as providing financial advice. Failure to comply with these restrictions could expose companies to claims for breach of contract and result in service suspensions and disruptions.