FDA Issues Draft Guidances on AI in Medical Devices, Drug Development: What Manufacturers and Sponsors Need to Know

By: Jennifer Yoo, Hayan Yoon, Ph.D., Pinar Bailey, Ph.D., Jefferson Lin

What You Need To Know

  • The FDA recently issued draft guidances for the use of AI in medical devices, drugs, and biologics.
  • The guidances emphasize the need for comprehensive AI policies addressing risk evaluation, data management, transparency, validation, and cybersecurity for AI-enabled medical devices throughout the total product lifecycle (TPLC), as well as for the use of AI in regulatory decision-making for drugs and biologics.
  • Early engagement will be key for manufacturers and sponsors as they develop, implement, and maintain their AI systems, especially when preparing submissions for marketing authorizations.

The Food and Drug Administration (FDA) has recently released two draft guidance documents (“Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations” and “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products”) outlining recommendations for the use of artificial intelligence (AI) in the development of FDA-regulated medical and pharmaceutical products.

These guidances reflect the FDA's evolving stance on this rapidly advancing technology and provide a framework for manufacturers and sponsors to develop, validate, and maintain AI systems. These recommendations are designed to ensure safety, effectiveness, and quality in AI-enabled applications, and emphasize the importance of robust AI policies to meet the FDA's expectations.

The Role of AI in Healthcare

AI is increasingly transforming healthcare, finding applications across medical devices and pharmaceutical development.

In medical devices, AI models power tools such as software that analyzes medical images to detect abnormalities, wearable health monitors providing actionable insights, and robotic systems enhancing surgical precision. In drug development, AI aids in predicting drug response or adverse reactions, integrating real-world data for better disease understanding, and optimizing manufacturing conditions.

These examples underscore the versatility of AI in advancing both medical device functionality and drug safety and efficacy.

AI in Medical Devices: FDA’s Key Principles

TPLC Framework. The FDA’s draft guidance for AI-enabled medical devices emphasizes a total product lifecycle (TPLC) approach, a framework designed to ensure that AI systems remain safe and effective over time. Built on the central principles of transparency, bias control, and risk-based approaches, the guidance underlines the importance of careful data management, documentation, labeling, and cybersecurity processes during development, validation, FDA submissions, and post-market management, as well as early engagement with the agency on each of these topics, for successful TPLC management.

Transparency. The FDA also views transparency as essential. The guidance recommends that device manufacturers ensure that key information about AI functionalities is accessible and understandable to users.

The guidance also includes recommendations for a design approach to transparency, including a “model card” format for communicating key information about the model and the AI-enabled device. Transparency is also emphasized as a way to improve the usability of AI-enabled medical devices.

Bias Control. The FDA’s bias control concerns focus on minimizing demographic biases in training data, so that a device benefits all relevant demographic groups similarly and remains safe and effective for its intended use, avoiding incorrect results. The guidance emphasizes strategies for identifying and addressing bias throughout the TPLC of AI-enabled devices, supported by the collection and documentation of evidence.

AI in Drug Development: Model Credibility, Context, a Risk-Based Approach and Transparency

For drug and biologics sponsors, the FDA’s guidance emphasizes a risk-based credibility assessment framework for the use of AI in regulatory decision-making. This includes defining the AI model’s specific role (the question, decision, or concern the model will address) and its context of use; assessing the risks associated with the AI model, taking into account model influence and decision consequence; and developing a credibility assessment plan to ensure the credibility of model outputs.

Much like in the AI-enabled device context, documentation of the data, model architecture, and methodology, along with evidence supporting credibility and evidence defining the model’s limitations, will be key both to a successful credibility assessment plan and to enabling the FDA’s determination of safety and efficacy in submissions.

Lifecycle management practices, such as ongoing monitoring and periodic retraining, are recommended to ensure AI models remain suitable for their regulatory contexts. Early engagement with the FDA is also recommended to address model risk and discuss credibility assessment activities.

Building Effective AI Policies

For AI-Enabled Medical Devices

Manufacturers developing AI-enabled medical devices benefit from comprehensive AI policies that address risk assessment, data management, transparency, validation, and cybersecurity at all points of the TPLC. Risk assessments should identify and mitigate device-related risks throughout the lifecycle, including potential issues arising from incomplete or ambiguous information. Data management procedures should ensure data quality, diversity, and representativeness, alongside measures to address bias in training datasets.

Transparency strategies include designing user interfaces that effectively communicate device information and creating detailed labeling to inform users about model performance and limitations. Validation processes should rigorously confirm device safety and effectiveness, incorporating human factors and usability evaluations as outlined in the FDA’s Human Factors Guidance.

Cybersecurity measures should address device integrity, confidentiality, authorization, availability, and security updates as recommended in the FDA’s “Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions.” Such measures can help avoid AI cybersecurity risks such as data poisoning, model stealing, data leakage, and performance drift.

The FDA guidance recommends that submissions for marketing authorization include detailed descriptions of how manufacturers achieve these standards. These submission expectations create a substantial documentation burden: manufacturers should record all data used to train the AI model, where it was sourced, and how it was used in training, along with the model architecture, the testing data, the evaluations conducted during validation, any observations relating to safety and efficacy, and many other details.

For AI in Drug Development

Sponsors of drugs and biologics leveraging AI in regulatory decision-making should structure their policies around establishing credibility, transparency, and continuous improvement, using a risk-based approach.

To create a robust credibility assessment framework, sponsors should establish documentation policies that define the AI model’s role with the specific details the FDA recommends, gather evidence of data reliability and relevance, and evaluate and record model performance against predefined metrics. Documentation should clearly describe the model, its architecture, and its development process, including: detailed descriptions of the datasets and subsets used in training and tuning (how they were sourced, whether they are centralized, and how they are annotated); the rationale behind model development and data selection; model calibration; and the quality assurance and control procedures applied to all computer software.

More specifically, documentation should establish that the datasets are fit for use, meeting standards of reliability and relevance, and describe the techniques employed to prevent over- or under-fitting. Policies should also include lifecycle management plans to monitor and adapt models, ensuring their ongoing suitability for regulatory use under a risk-based approach, with procedures to document any deviations from the plan.

Sponsors and manufacturers should adopt and implement comprehensive AI policies, since the FDA’s recommendations have implications beyond the regulatory process.

For example, there are significant intellectual property (IP) implications related to transparency and the early disclosure of AI models, data, and other details to the FDA as part of the submission and approval process. Disclosures to the FDA may include trade secrets, details of the inventive process, or information that must also be disclosed to the USPTO to meet the duty of candor and good faith. The development and implementation of an AI policy should therefore ensure IP protection while complying with regulatory expectations. Documentation policies also raise IP considerations, for example preserving factual evidence of human contribution, since AI-assisted inventions are patentable only if a human made a “significant” contribution. An AI policy adopted broadly across a company’s functions would ensure that both regulatory and IP factors are considered during development and at the time of submission, and would promote consistency in statements made to the FDA, the USPTO, and similar foreign agencies.

Early engagement with the FDA is likewise an invaluable step in aligning AI implementation with regulatory expectations. In addition to documentation efforts, sponsors should proactively discuss AI model risk and credibility assessment activities with the FDA to streamline the approval process and address potential concerns earlier.

Shared Policy Foundations for AI Applications

Both manufacturers of AI-enabled devices and sponsors using AI tools in regulatory decision-making for drugs and biologics benefit from shared foundational elements in their AI policies. These include adopting a risk-based approach tailored to the application, ensuring data quality and fitness for purpose, and emphasizing transparency, in each case through comprehensive documentation.

Continuous improvement practices, including regular performance monitoring and updates, help maintain the reliability and safety of AI systems, and detailed records of AI processes and decisions help to ensure regulatory compliance and support transparent communication with stakeholders.

Navigating FDA Expectations

The FDA’s draft guidances on AI-enabled medical devices and AI in regulatory decision-making highlight the growing importance of integrating AI responsibly in healthcare.

Manufacturers and sponsors have an opportunity to align with these frameworks, fostering trust in AI technologies while advancing innovation and preparing for successful submissions for marketing authorizations.

By establishing robust AI policies that address these key considerations, stakeholders can ensure compliance, safeguard public health, solidify public trust, streamline commercial negotiations, and drive transformative advancements in medical technology and drug development.

Comments on the guidance documents are due by April 7, 2025.