The U.S. Food and Drug Administration (FDA) released a draft guidance earlier today (January 7, 2025) titled "Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products". The guidance, aimed at sponsors and other stakeholders, proposes a risk-based framework for assessing the credibility of AI models to ensure they are suitable for supporting regulatory decisions.
The guidance focuses on the use of AI to generate data or information that supports regulatory decisions, including evaluations of safety, effectiveness, and quality across the drug product lifecycle. Applications outside this scope, such as drug discovery or operational efficiencies, are excluded. Sponsors who are uncertain whether their use of AI falls within scope are encouraged to engage with the FDA early.
AI in Regulatory Decision-Making
AI technologies hold transformative potential for streamlining drug development and enhancing patient outcomes. For instance, AI can reduce reliance on animal-based studies, process real-world data, integrate genetic databases, and support manufacturing processes. However, challenges such as dataset variability, bias, and limited model transparency require rigorous evaluation.
Framework for AI Model Credibility Assessment
The guidance introduces a seven-step framework to evaluate AI model credibility:
Define the Question of Interest: Clearly articulate the specific decision or concern the AI model aims to address.
Define the Context of Use (COU): Specify the model’s role, scope, and whether additional evidence will complement its outputs.
Assess Model Risk: Determine the model's risk based on its influence on the decision and the consequences of an incorrect decision (see the sketch after this list).
Develop a Credibility Assessment Plan: Outline activities to establish the model's credibility, including data management, model development, and evaluation.
Execute the Plan: Implement the defined credibility assessment activities.
Document and Address Deviations: Compile results in a comprehensive report, noting any deviations from the plan.
Evaluate Model Adequacy: Decide whether the AI model suits the COU or requires modifications.
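To make the risk step more concrete, the following is a minimal, hypothetical sketch of how a sponsor might record a context of use and derive a coarse model-risk tier from the two factors the draft guidance highlights (model influence and decision consequence). The tier names and the scoring rule below are illustrative assumptions, not values defined by the FDA.

```python
from dataclasses import dataclass

# Illustrative tiers only; the draft guidance describes model risk as a function
# of model influence and decision consequence, but the exact levels and the
# mapping used here are assumptions for demonstration purposes.
LEVELS = ("low", "medium", "high")

@dataclass
class ContextOfUse:
    question_of_interest: str   # Step 1: the decision the AI model helps address
    role_and_scope: str         # Step 2: what the model does within that decision
    model_influence: str        # contribution of the model relative to other evidence
    decision_consequence: str   # impact of an incorrect decision

def model_risk(cou: ContextOfUse) -> str:
    """Map influence x consequence to a coarse risk tier (Step 3).
    Higher influence combined with higher consequence yields higher risk."""
    score = LEVELS.index(cou.model_influence) + LEVELS.index(cou.decision_consequence)
    return "high" if score >= 3 else "medium" if score == 2 else "low"

# Hypothetical usage: an AI output used as the sole basis for a safety decision
cou = ContextOfUse(
    question_of_interest="Which trial participants need intensive safety monitoring?",
    role_and_scope="Model output is the sole basis for assigning monitoring intensity",
    model_influence="high",
    decision_consequence="high",
)
print(model_risk(cou))  # -> "high", warranting more extensive credibility activities
```

A higher risk tier would generally call for a more rigorous credibility assessment plan in Steps 4 and 5, while a lower tier may justify a lighter-weight plan.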
The FDA emphasizes the life cycle maintenance of AI models, particularly in manufacturing and post-marketing scenarios, where model performance may evolve over time. Sponsors are encouraged to use risk-based strategies for monitoring and updating models to maintain credibility.
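As one illustration of such life cycle monitoring, the hypothetical snippet below compares a rolling window of a deployed model's performance metric against a pre-specified acceptance threshold and flags when credibility may need to be re-assessed. The metric, window size, and threshold are assumptions for the sketch, not requirements from the guidance.

```python
from statistics import mean

def check_performance_drift(recent_scores: list[float],
                            acceptance_threshold: float,
                            window: int = 50) -> bool:
    """Return True if the rolling mean of a monitored performance metric
    (e.g., agreement with a reference method) falls below a pre-specified
    acceptance threshold, signaling that the model's credibility for its
    context of use should be re-evaluated. Illustrative only."""
    if len(recent_scores) < window:
        return False  # not enough observations yet to judge drift
    return mean(recent_scores[-window:]) < acceptance_threshold

# Hypothetical usage: agreement scores degrade over time in production
scores = [0.95] * 40 + [0.82] * 60
print(check_performance_drift(scores, acceptance_threshold=0.90))  # -> True
```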
The guidance encourages sponsors to engage with the FDA during the early stages of AI model development to ensure alignment with regulatory expectations. Various engagement options are provided, including formal meetings and programs tailored to clinical trials, manufacturing innovations, and real-world evidence generation.
By providing a structured approach to AI model assessment, the FDA aims to balance innovation with accountability, ensuring AI’s potential is harnessed responsibly.
For more details, refer to the official draft guidance on the FDA website.