By Sharan Murugan

IMDRF Guidance: Good Machine Learning Practice for Medical Device Development: Guiding Principles

The International Medical Device Regulators Forum (IMDRF) has published a draft guidance (1 July 2024), "Good Machine Learning Practice for Medical Device Development: Guiding Principles," outlining good machine learning practice (GMLP) principles for medical device development. The document provides guiding principles to ensure the safe, effective, and high-quality development of artificial intelligence (AI) and machine learning (ML) in medical devices.


It applies to all stages of the lifecycle of AI/ML-enabled medical devices, from design and development through post-market surveillance. The principles outlined are intended to be applicable globally, considering various regulatory requirements.


AI technologies in healthcare offer transformative potential by extracting significant insights from vast data. AI development, however, presents unique challenges due to its iterative, data-driven nature.


This guidance introduces 10 guiding principles for Good Machine Learning Practice (GMLP) aimed at fostering the development of safe, effective AI-enabled medical devices. It encourages international collaboration to advance research, create educational tools, and harmonize standards.


The 10 Guiding Principles are:

  1. Intended Use and Multidisciplinary Expertise

  • Understand the device’s intended use and leverage expertise throughout its lifecycle to ensure safety and effectiveness.

  2. Engineering, Design, and Security Practices

  • Implement robust software engineering, medical device design, and security practices, including risk management and traceability.

  3. Representative Clinical Study Participants

  • Ensure datasets represent the intended patient population to generalize results and manage biases effectively.

  4. Independent Training and Test Datasets

  • Maintain independence between training and test datasets to ensure reliable validation.

  5. Fit-for-Purpose Reference Standards

  • Use well-characterized reference standards to support model robustness and generalizability.

  6. Tailored Model Choice and Design

  • Design models suited to available data and intended use to mitigate risks like overfitting and performance degradation.

  7. Human-AI Team Performance Assessment

  • Evaluate model performance within the intended use environment, considering human factors like user skills and errors.

  8. Clinically Relevant Testing Conditions

  • Develop test plans that reflect clinical conditions and assess performance across relevant subgroups and environments.

  9. Clear User Information

  • Provide users with essential information about the device, including its intended use, benefits, risks, and performance.

  10. Monitoring and Risk Management of Deployed Models

  • Ensure ongoing monitoring and manage retraining risks to maintain model safety and performance in real-world use.
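The independence requirement in Principle 4 can be illustrated with a small sketch. A common pitfall in medical imaging datasets is splitting at the record level, which can place different scans from the same patient in both training and test sets. The function below (a hypothetical example, not part of the IMDRF guidance) splits at the patient level instead, so no patient's data leaks across the boundary:

```python
# Hypothetical sketch of Principle 4: keep training and test data independent
# at the patient level, so no patient's records appear in both sets.
import random

def patient_level_split(records, test_fraction=0.2, seed=42):
    """Split records (dicts with a 'patient_id' key) so that every record
    from a given patient lands in exactly one of the two sets."""
    patient_ids = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patient_ids)
    n_test = max(1, int(len(patient_ids) * test_fraction))
    test_ids = set(patient_ids[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test

# Example: two scans each for five patients
records = [{"patient_id": p, "scan": s} for p in "ABCDE" for s in (1, 2)]
train, test = patient_level_split(records)
train_ids = {r["patient_id"] for r in train}
test_ids = {r["patient_id"] for r in test}
assert train_ids.isdisjoint(test_ids)  # no patient appears in both sets
```

A naive per-record split would fail that final check whenever a patient contributes multiple records, which is exactly the kind of subtle dependence the principle asks developers to guard against.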


For further details, please refer to the full IMDRF guidance document.


