Designing Governance for AI Models Used in Safety, Quality and Clinical Domains


As Artificial Intelligence (AI), automation, and advanced analytics are incorporated into pharmaceutical operations, regulatory affairs professionals must ensure these technologies comply with established regulations. This article outlines a framework for designing governance for AI models in the safety, quality, and clinical domains, while aligning with regulatory compliance expectations from agencies such as the FDA, EMA, and MHRA.

Regulatory Compliance Context

In the current landscape of pharmaceutical development, AI technologies play a crucial role in enhancing productivity and accuracy. However, these advancements bring heightened scrutiny from regulatory agencies concerning data integrity and compliance. Regulatory compliance expertise is therefore pivotal in ensuring that AI and automation technologies adhere to the standards set by 21 CFR Part 11, EU Annex 11, and other GxP guidelines.

Legal/Regulatory Basis

21 CFR Part 11

21 CFR Part 11 establishes the criteria under which electronic records and electronic signatures are considered trustworthy, reliable, and equivalent to traditional paper records. As AI models often generate or utilize electronic records, it is essential to ensure they comply with this regulation. Key aspects of Part 11 include:

  • Validation: AI systems must be validated to ensure accuracy, reliability, and consistency in performance.
  • Audit Trails: Systems must maintain appropriate audit trails that capture all changes made to electronic records.
  • Access Controls: Implementing user access controls to ensure that only authorized personnel can access and modify records.
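To make the audit-trail requirement concrete, the following is a minimal sketch of an append-only audit trail in which each entry is hash-chained to the previous one, so that any after-the-fact alteration of a record is detectable on review. The class, field names, and hashing scheme are illustrative assumptions, not a prescribed Part 11 implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only audit trail; each entry is chained to the previous
    one by hash so tampering with a past record is detectable."""

    def __init__(self):
        self._entries = []

    def record(self, user, action, record_id, old_value, new_value):
        # Link this entry to the previous one (genesis entries chain to zeros).
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "action": action,
            "record_id": record_id,
            "old_value": old_value,
            "new_value": new_value,
            "prev_hash": prev_hash,
        }
        # Hash the canonical JSON form of the entry body.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self):
        """Recompute the hash chain; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

A production system would pair such a trail with the access controls noted above, so that only authorized users can write entries at all.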

EU Annex 11 Requirements

EU Annex 11 sets out GMP expectations for computerized systems used in the pharmaceutical industry, emphasizing data integrity, system performance, and compliance. Under Annex 11, the governance of AI models necessitates:

  • Validation: All computerized systems, including AI, must be validated for intended use.
  • Data Integrity: Establishing strong systems to ensure data integrity throughout the lifecycle of the AI model.
  • Documentation: Adequate documentation to support the validation processes and ongoing maintenance.

Documentation

Effective documentation is a cornerstone of regulatory compliance, especially when dealing with AI systems. The key documentation components include:

Validation Documentation

Validation documentation should encompass:

  • User Requirements Specification (URS): Clearly delineate end-user expectations and system functionality.
  • Functional Specification (FS): Document how the system meets the user requirements.
  • Validation Plan: Outline the overall validation strategy, including acceptance criteria.
  • Test Protocols and Reports: Detailed descriptions of validation testing scenarios and results.
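The "Test Protocols and Reports" item can be sketched in code: a validation report compares observed model metrics against the pre-specified acceptance criteria from the Validation Plan. The metric names and thresholds below are purely illustrative assumptions, not regulatory values.

```python
# Hypothetical acceptance criteria from a validation plan;
# names and thresholds are illustrative only.
ACCEPTANCE_CRITERIA = {
    "sensitivity": 0.90,      # minimum acceptable value
    "specificity": 0.85,
    "reproducibility": 0.95,
}

def evaluate_protocol(observed):
    """Compare observed validation metrics against acceptance criteria,
    returning a per-criterion pass/fail plus an overall verdict."""
    results = {
        name: {
            "observed": observed.get(name),
            "threshold": threshold,
            "pass": observed.get(name, 0.0) >= threshold,
        }
        for name, threshold in ACCEPTANCE_CRITERIA.items()
    }
    return {
        "criteria": results,
        "overall_pass": all(r["pass"] for r in results.values()),
    }
```

The point of scripting the comparison is traceability: the same criteria document drives both the protocol and the report, so a reviewer can reconcile them directly.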

Data Governance Documentation

Documentation should also address data governance, including:

  • Data Quality Management Plan: Define standards for data quality, ensuring data is accurate, consistent, and reliable.
  • Data Lifecycle Management: Describe the processes for data creation, storage, usage, archival, and deletion.
  • Audit Trail Review Procedures: Document processes for regular review of audit trails.
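A Data Quality Management Plan typically translates into executable checks. The sketch below validates a single record against a few assumed rules; the field names (`subject_id`, `dose_mg`, `visit_date`) and ranges are hypothetical stand-ins for whatever standards the plan actually defines.

```python
from datetime import date

def check_record(record):
    """Return a list of data-quality findings for one record
    (an empty list means the record passed all checks)."""
    findings = []
    # Completeness: key identifier must be present and non-empty.
    if not record.get("subject_id"):
        findings.append("missing subject_id")
    # Plausibility: dose must fall within an assumed valid range.
    dose = record.get("dose_mg")
    if dose is None or not (0 < dose <= 1000):
        findings.append("dose_mg out of plausible range")
    # Consistency: visit dates cannot be in the future.
    visit = record.get("visit_date")
    if visit is None or visit > date.today():
        findings.append("visit_date missing or in the future")
    return findings
```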

Review/Approval Flow

The approval process for AI models should involve carefully defined stages to ensure compliance with regulatory requirements. These stages typically include:

Pre-Submission Stage

In this stage, regulatory affairs must:

  • Engage with cross-functional teams (e.g., CMC, Clinical, IT) to outline AI model objectives.
  • Conduct a gap analysis to identify any compliance shortfalls based on existing regulations.
  • Prepare initial documentation, including URS and FS.

Submission Stage

The submission stage consists of:

  • Compiling the complete validation documentation to form the submission package.
  • Preparing a regulatory submission strategy, determining whether the AI model will require a new application or variation based on its impact on existing processes.

Post-Submission Stage

Upon submission, it is critical to:

  • Prepare to respond promptly to agency queries about AI model functionality and validation.
  • Maintain a continuous feedback loop with stakeholders to ensure that all aspects of AI performance remain compliant post-approval.

Common Deficiencies

When dealing with AI model governance, several common deficiencies can arise that regulatory authorities are likely to scrutinize:

Lack of Proper Validation

Regulatory agencies often cite a lack of robust validation processes as a major deficiency. To address this:

  • Establish comprehensive validation protocols that follow industry standards.
  • Ensure that systems are re-evaluated whenever significant changes occur.

Inadequate Audit Trail Management

Failure to maintain robust audit trails can lead to compliance issues. To mitigate this risk:

  • Define clear policies about how audit trails will be created and maintained.
  • Implement regular reviews of audit trails to quickly identify any anomalies or deviations.
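A periodic audit-trail review can itself be partly automated. The sketch below flags entries made by users outside an authorized list or outside business hours; the user list, entry shape, and hour window are assumptions for illustration, not a defined review procedure.

```python
AUTHORIZED_USERS = {"jdoe", "asmith"}  # illustrative access-control list

def review_audit_trail(entries, business_hours=(8, 18)):
    """Flag audit-trail entries for follow-up: edits by unauthorized
    users or edits made outside business hours. Each entry is assumed
    to carry a 'user' and a pre-extracted 'hour' (0-23)."""
    flagged = []
    for entry in entries:
        reasons = []
        if entry["user"] not in AUTHORIZED_USERS:
            reasons.append("unauthorized user")
        if not (business_hours[0] <= entry["hour"] < business_hours[1]):
            reasons.append("outside business hours")
        if reasons:
            flagged.append({"entry": entry, "reasons": reasons})
    return flagged
```

Automated flagging of this kind narrows, but does not replace, the documented human review of the trail.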

Insufficient User Training

Inadequate training of users can lead to improper handling of AI systems. Address this deficiency through:

  • Comprehensive training programs designed for all user levels.
  • Regular refreshers and updates as new features or systems are integrated.

RA-Specific Decision Points

When to File as Variation vs. New Application

Determining whether to file a variation or a new application requires careful consideration:

  • Variation: If the AI model is an enhancement to an existing system that does not significantly alter its functionality or safety profiles, a variation is appropriate.
  • New Application: If the AI model introduces new modes of action, significantly changes the intended use, or impacts safety and efficacy, it should be submitted as a new application.
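The two criteria above can be expressed as a simple triage function: any change touching mode of action, intended use, or safety/efficacy points to a new application, otherwise a variation may suffice. This is a deliberately simplified sketch with hypothetical flag names; real filing decisions rest on agency guidance and case-by-case assessment.

```python
def filing_route(change):
    """Triage a proposed AI model change: return 'new application' if
    any major trigger applies, else 'variation'. Keys are illustrative."""
    triggers = [
        change.get("new_mode_of_action", False),
        change.get("changes_intended_use", False),
        change.get("impacts_safety_or_efficacy", False),
    ]
    return "new application" if any(triggers) else "variation"
```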

How to Justify Bridging Data

Justifying bridging data to support AI model submissions can be challenging. Key strategies include:

  • Providing a clear rationale for the relevance of the bridging data to the new AI model’s context.
  • Referencing regulatory precedents where similar data has been successfully accepted.
  • Incorporating real-world evidence to back data claims where applicable.

Conclusion

As pharmaceutical companies increasingly adopt AI, the need for effective governance frameworks becomes essential. Adhering to the regulatory landscape outlined by 21 CFR Part 11, EU Annex 11, and GxP guidelines is paramount. Regulatory affairs professionals must navigate the complexities of AI compliance through thorough documentation, structured workflows, and proactive engagement with regulatory authorities. By doing so, organizations can harness the benefits of AI while ensuring regulatory compliance and data integrity.


This article serves as a foundational guide for regulatory affairs, CMC, and labelling teams seeking to navigate the intricate landscape of AI governance in the pharmaceutical industry.