The article describes in detail the approach to be applied when assessing the credibility evidence that responsible parties are expected to submit.

FDA Guidance on Assessing Credibility of Computational Modeling: Overview

The Food and Drug Administration (FDA or the Agency), the US regulatory authority for healthcare products, has published a guidance document dedicated to assessing the credibility of computational modeling and simulation in medical device submissions.

The document provides an overview of the applicable regulatory requirements, as well as additional clarifications and recommendations to be taken into consideration by medical device manufacturers and other parties involved in order to ensure compliance with them.

At the same time, provisions of the guidance are non-binding in their legal nature, nor are they intended to introduce new rules or impose new obligations.

Moreover, the authority explicitly states that an alternative approach could be applied, provided such an approach is in line with the existing legal framework and has been agreed with the authority in advance.

Introduction to Credibility Evidence in Computational Modeling 

First of all, the authority notes that in the sophisticated sphere of medical device regulatory submissions, “credibility evidence” is one of the most important concepts.

As further explained by the FDA, the term extends beyond traditional validation activities, covering any substantiation that supports the reliability of a computational model for its intended Context of Use (COU).

This spectrum of evidence includes diverse verification and uncertainty quantification (UQ) activities, each contributing unique insights into the model’s fidelity.

According to the guidance, the aim is not merely to collect such evidence but to discern and systematically categorize it.

Such an organized compilation of evidence serves to assess the model’s credibility, assuring regulatory bodies and stakeholders of its precision and applicability in clinical scenarios.

Verification and Its Categories

Verification, a critical component of credibility, is divided into code verification and calculation verification.

Code verification is a rigorous process ensuring that numerical algorithms are accurately implemented within software, without errors that could compromise numerical precision.

It encompasses stringent software quality assurance and rigorous numerical code verification, as outlined in ASME V&V 40.
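
For illustration only (this sketch is not drawn from the FDA guidance), code verification is frequently demonstrated by comparing a numerical implementation against a known analytical solution and confirming the expected convergence rate. The hypothetical Python sketch below checks a simple forward-Euler integrator against the exact solution of dy/dt = -k*y.

```python
# Minimal code verification sketch: a hypothetical forward-Euler integrator
# is checked against the exact solution of dy/dt = -k*y, confirming the
# expected first-order convergence of the numerical error.
import math

def forward_euler(k, y0, t_end, n_steps):
    """Simple explicit Euler integration of dy/dt = -k*y."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += dt * (-k * y)
    return y

k, y0, t_end = 2.0, 1.0, 1.0
exact = y0 * math.exp(-k * t_end)

# For a first-order method, the error should roughly halve when the step
# count doubles.
prev_err = None
for n in (100, 200, 400, 800):
    err = abs(forward_euler(k, y0, t_end, n) - exact)
    ratio = "" if prev_err is None else f"  (ratio {prev_err / err:.2f})"
    print(f"n={n:4d}  error={err:.3e}{ratio}")
    prev_err = err
```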

Calculation verification shifts focus to the estimation of numerical errors in model output, often attributable to decisions such as spatial discretization.

This form of verification is dynamic and can be incorporated at any stage of simulation, whether during validation or within COU-specific simulations.
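
As a purely illustrative sketch (again, not prescribed by the guidance), calculation verification of discretization error is often carried out via mesh-refinement studies. The example below assumes a hypothetical scalar output computed at three successively refined meshes and applies Richardson extrapolation to estimate the observed order of accuracy and the remaining numerical error.

```python
# Minimal sketch of a mesh-refinement (calculation verification) study.
# Assumes solutions f1, f2, f3 were obtained with a hypothetical solver
# on meshes of size h1 < h2 < h3 with a constant refinement ratio r.
import math

def richardson_estimate(f1, f2, f3, r):
    """Estimate observed order of accuracy and discretization error."""
    # Observed order of accuracy from three successively refined meshes
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)
    # Richardson-extrapolated (approximately mesh-independent) value
    f_exact = f1 + (f1 - f2) / (r**p - 1)
    # Relative discretization error of the finest-mesh solution
    err = abs((f_exact - f1) / f_exact)
    return p, f_exact, err

# Example values for a scalar output (e.g., peak stress) at three meshes
p, f_exact, err = richardson_estimate(f1=101.2, f2=102.9, f3=106.5, r=2.0)
print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_exact:.1f}")
print(f"estimated relative discretization error = {err:.2%}")
```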

Through these verification processes, a model’s numerical integrity is scrutinized and affirmed, laying a foundational element of credibility.

Validation and Its Role in Assessing Credibility

Validation stands distinct from calibration, emphasizing the comparison of model predictions with data independent of those used to construct the model.

This independent scrutiny is what supports a model’s credibility. As explained by the authority, validation is not merely a checkbox but a comprehensive assessment that extends to the applicability of the model to its COU.

This involves an applicability assessment, which evaluates the relevance and transferability of validation activities to the COU.

The assessment ensures that differences between the model’s validation conditions and its practical application do not undermine the validation’s relevance to the COU, thus maintaining the integrity of the model’s credibility.
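
To make the distinction concrete, a hypothetical sketch of a validation comparison follows: model predictions are compared against independent bench-test measurements that were not used to build or calibrate the model. The data, metric, and acceptance threshold are illustrative assumptions, not requirements from the guidance.

```python
# Minimal sketch of a validation comparison: model predictions are compared
# against independent measurements that were NOT used to build or calibrate
# the model. Values and the acceptance threshold are illustrative only.
import statistics

# Independent bench-test measurements (hypothetical) and model predictions
measured  = [12.1, 14.8, 9.6, 11.3, 13.0]   # e.g., displacement in mm
predicted = [11.7, 15.4, 10.1, 11.0, 13.6]

errors = [abs(p - m) / m for p, m in zip(predicted, measured)]
mean_rel_error = statistics.mean(errors)
max_rel_error = max(errors)

# Illustrative acceptance threshold; in practice it should be justified
# relative to the model risk and the Context of Use (COU).
THRESHOLD = 0.10
print(f"mean relative error: {mean_rel_error:.1%}")
print(f"max relative error:  {max_rel_error:.1%}")
print("within illustrative threshold" if max_rel_error <= THRESHOLD
      else "exceeds illustrative threshold")
```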

Uncertainty Quantification and Sensitivity Analysis 

According to the guidance, UQ is pivotal in estimating the uncertainty inherent in model outputs. It considers the variability of inputs and the structural nuances of the model itself.

UQ is intrinsically linked to sensitivity analysis (SA), which addresses the influence of individual model inputs on outputs. SA can simplify UQ by identifying which inputs significantly affect outputs, thereby narrowing the focus for UQ efforts.

However, the ultimate objective of UQ is to quantify the uncertainty of the model outputs, providing a quantitative basis for the model’s credibility.

Both UQ and SA can be applied to validation or COU simulations, underscoring their versatility in enhancing model reliability.
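
The sketch below illustrates, under simplified assumptions, how forward UQ and a basic one-at-a-time SA might look in practice; the model function and input distributions are hypothetical placeholders rather than anything prescribed by the FDA.

```python
# Minimal sketch of forward uncertainty quantification (UQ) with a simple
# one-at-a-time sensitivity analysis (SA). The model function and the input
# distributions are hypothetical placeholders.
import random
import statistics

def model(stiffness, load):
    """Placeholder computational model: predicted deflection."""
    return load / stiffness

random.seed(0)

# Input uncertainty: assumed normal distributions for each model input
samples = [
    model(stiffness=random.gauss(200.0, 10.0),   # mean 200, sd 10
          load=random.gauss(50.0, 5.0))          # mean 50, sd 5
    for _ in range(10_000)
]

# UQ: propagate input uncertainty to the model output
print(f"output mean = {statistics.mean(samples):.4f}")
print(f"output sd   = {statistics.stdev(samples):.4f}")

# SA (one-at-a-time): perturb each input by 10% and compare output changes
base = model(200.0, 50.0)
for name, perturbed in [("stiffness", model(220.0, 50.0)),
                        ("load",      model(200.0, 55.0))]:
    change = (perturbed - base) / base
    print(f"+10% {name}: output change {change:+.1%}")
```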

Categorization of Credibility Evidence 

The categorization of credibility evidence into eight distinct categories serves to systematically arrange the evidence supporting a computational model.

This structure aids in organizing the evidence; however, it is neither exhaustive nor indicative of the quality or rigor of the evidence itself.

The categories are not ranked; rather, they serve as a framework to guide the compilation of evidence, ensuring a comprehensive and structured presentation in regulatory submissions.

Each category of credibility evidence is defined by specific characteristics and contexts of application.

For example, code verification results (Category 1) demonstrate the absence of errors in the numerical implementation, while in vivo validation results (Category 4) compare the model’s predictions against biological data from living organisms.

The application of these categories contributes to a detailed understanding of the types of evidence that can substantiate a model’s credibility.
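
As a hypothetical illustration of such a structured compilation, the sketch below organizes evidence items by category number; only the two categories named above are shown, and the descriptions and file names are placeholders, not content from the guidance.

```python
# Minimal sketch of organizing credibility evidence by category for a
# submission package. Only the two categories named in the article are
# shown; descriptions and file names are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    category: int          # 1..8 per the guidance categorization
    description: str
    artifacts: list = field(default_factory=list)

evidence = [
    EvidenceItem(category=1,
                 description="Code verification results",
                 artifacts=["convergence_study.pdf"]),
    EvidenceItem(category=4,
                 description="In vivo validation results",
                 artifacts=["animal_study_comparison.xlsx"]),
    # ... remaining categories omitted in this sketch
]

for item in sorted(evidence, key=lambda e: e.category):
    print(f"Category {item.category}: {item.description} -> {item.artifacts}")
```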

Regulatory Submission and Evidence Inclusion 

As explained by the FDA, the integration of credibility evidence into regulatory submissions should reflect the model’s associated risk.

This guidance does not prescribe specific evidence types for inclusion but suggests a comprehensive approach, considering factors like model type and the maturity of the modeling field.

The evidence should encompass aspects like code verification, calculation verification, and validation to holistically represent the model’s capabilities.

The application of credibility evidence is exemplified through practical cases, such as in silico device testing, which may involve multiple forms of credibility evidence to substantiate the model’s application.

These examples highlight the context-dependent nature of evidence selection and the importance of a tailored approach to evidence integration in regulatory submissions.

Conclusion

In summary, credibility evidence is integral to supporting computational models for regulatory purposes. Developers are encouraged to proactively engage with regulatory feedback processes, such as the Q-Submission process, to ensure the reliability of the approach applied.

How Can RegDesk Help?

RegDesk is a holistic Regulatory Information Management System that provides medical device and pharma companies with regulatory intelligence for over 120 markets worldwide. It can help you prepare and publish global applications, manage standards, run change assessments, and obtain real-time alerts on regulatory changes through a centralized platform. Our clients also have access to our network of over 4000 compliance experts worldwide to obtain verification on critical questions. Global expansion has never been this simple.

RegDesk is recognized as a Regulatory Intelligence Representative Vendor! Learn more by reading the 2024 Gartner® Market Guide for Regulatory Intelligence Solutions.
