2026 Edition: AI & Machine Learning for Medical Devices

Date:
Friday, February 20, 2026
Time:
10:00 AM PST | 01:00 PM EST
Duration:
60 Minutes
Instructor:
Charles H. Paul
Webinar Id:
54659


Price Details
  • Live: $149
  • Corporate Live: $299
  • Recorded: $199
  • Corporate Recorded: $399
Combo Offers
  • Live + Recorded: $299 (regular $348)
  • Corporate (Live + Recorded): $599 (regular $698)
Overview:

Artificial Intelligence (AI) and Machine Learning (ML) are rapidly transforming the medical device industry.

From diagnostic imaging and digital pathology to implantable devices, remote monitoring applications, and personalized treatment recommendations, much of the future of healthcare relies on software that can learn, adapt, and support complex decision-making. Unlike traditional deterministic software, where outputs are predictable and static, AI/ML systems generate dynamic outcomes based on evolving algorithms and real-world data.

This significant shift introduces new regulatory, safety, ethical, and lifecycle control challenges that device manufacturers must understand. This webinar provides a comprehensive and practical exploration of these challenges and offers a roadmap for compliance, development, and risk control.

The session begins by defining how AI/ML differs from conventional medical device software and why the technology demands a fundamentally different approach to validation, oversight, and change control. Participants learn how regulators such as FDA and the European Union view AI/ML technologies not as black-box tools but as medical products requiring full lifecycle assurance, algorithm transparency, clinical reliability, and post-market performance control.

The webinar explains emerging regulatory frameworks and standards, including FDA's Total Product Lifecycle approach and the Predetermined Change Control Plan (PCCP), as well as EU MDR requirements, ISO 13485, IEC 62304, ISO 14971, and emerging AI-specific standards such as ISO/IEC 23894.

Risk management forms a central part of this training. Attendees learn to identify and mitigate unique AI/ML hazards such as data bias, model drift, cybersecurity vulnerabilities, adversarial manipulation, and unequal model performance across different patient populations.
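
To give one concrete picture of the "unequal model performance" hazard, the minimal Python sketch below compares a model's sensitivity across patient subgroups and flags any group that falls below a chosen floor. The field names (`subgroup`, `y_true`, `y_pred`) and the floor value are illustrative assumptions, not a method prescribed by the webinar or any regulation.

```python
from collections import defaultdict

def sensitivity_by_subgroup(records, floor=0.85):
    """Flag patient subgroups whose sensitivity (recall) falls below a floor.

    `records` is an iterable of dicts with hypothetical keys:
    'subgroup' (e.g. an age band), 'y_true' (1 = condition present),
    and 'y_pred' (1 = model flagged the condition).
    """
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives per subgroup

    for r in records:
        if r["y_true"] == 1:
            if r["y_pred"] == 1:
                tp[r["subgroup"]] += 1
            else:
                fn[r["subgroup"]] += 1

    flagged = {}
    for group in tp.keys() | fn.keys():
        total = tp[group] + fn[group]
        sens = tp[group] / total if total else None
        if sens is not None and sens < floor:
            flagged[group] = round(sens, 3)
    return flagged  # e.g. {'age_80_plus': 0.71} would warrant investigation
```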

The importance of data quality is emphasized, including training dataset selection, validation controls, real-world performance monitoring, and governance for data integrity. The webinar explains how bias, poor sampling, data contamination, or unrepresentative patient populations can directly impact the clinical safety and effectiveness of the device.
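
To make "real-world performance monitoring" tangible, here is a minimal, hypothetical sketch of a drift check: it compares a rolling window of post-market accuracy against the accuracy established at validation and signals when the drop exceeds a tolerance. The window size and tolerance are illustrative choices a manufacturer would have to justify in its own monitoring plan.

```python
from collections import deque

class DriftMonitor:
    """Signal when rolling real-world accuracy drops below the validated baseline."""

    def __init__(self, baseline_accuracy, tolerance=0.05, window=500):
        self.baseline = baseline_accuracy          # accuracy from design validation
        self.tolerance = tolerance                 # largest acceptable decline
        self.outcomes = deque(maxlen=window)       # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def drifted(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough post-market data yet
        current = sum(self.outcomes) / len(self.outcomes)
        return (self.baseline - current) > self.tolerance
```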

Participants also receive practical guidance on design controls, testing, validation, and clinical performance evaluation of AI/ML models. The session explores software verification and validation methods tailored to AI, including scenario-based testing, edge case evaluation, robustness testing, post-market learning rules, and confidence threshold management.
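
The "confidence threshold management" idea can be pictured with a short, hypothetical routing function: outputs above a validated threshold are released automatically, while low-confidence cases are deferred to a clinician for review. The threshold values and disposition labels below are assumptions for illustration only.

```python
def route_prediction(probability, threshold=0.90):
    """Route a model output based on its confidence.

    `probability` is the model's confidence in its positive finding;
    `threshold` would be fixed and justified during validation.
    """
    if probability >= threshold:
        return "auto_report"         # high confidence: release with normal labeling
    if probability >= 0.5:
        return "human_review"        # uncertain: defer to a qualified clinician
    return "negative_no_action"      # below decision point: no finding reported
```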

The differences between static (locked) algorithms and adaptive learning systems are explained, with step-by-step insights into how real-world model updates must be planned, documented, and controlled.
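
For an adaptive model governed by a Predetermined Change Control Plan, an update is typically accepted only if it stays within pre-specified performance bounds. The sketch below shows that gate in schematic form; the metric names and limits are hypothetical and would be defined in the manufacturer's own PCCP.

```python
# Hypothetical pre-specified acceptance criteria from a PCCP:
# each metric must not fall below its floor after retraining.
PCCP_FLOORS = {"sensitivity": 0.92, "specificity": 0.88, "auc": 0.95}
MAX_ALLOWED_DROP = 0.02  # largest acceptable decline versus the deployed model

def update_accepted(deployed_metrics, candidate_metrics):
    """Return True only if the retrained model stays inside the PCCP envelope."""
    for metric, floor in PCCP_FLOORS.items():
        new = candidate_metrics[metric]
        old = deployed_metrics[metric]
        if new < floor or (old - new) > MAX_ALLOWED_DROP:
            return False  # outside the predetermined range: a new review is needed
    return True
```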

A critical portion of the training addresses cybersecurity and ethical obligations. Because AI/ML models rely heavily on data integrity and connectivity, cybersecurity weaknesses can influence predictions or allow malicious tampering with models. Ethical considerations such as explainability, transparency, and prevention of discriminatory outcomes are presented as core regulatory requirements, not optional business values. Participants learn how to document and communicate these controls to demonstrate regulatory compliance, patient safety, and trustworthiness.

By the end of this webinar, attendees will understand how to develop, validate, document, and maintain AI/ML medical devices in a way that meets global regulatory expectations and protects patients through robust lifecycle control, transparency, and proactive risk management.

Why should you Attend: AI and Machine Learning are rapidly reshaping the medical device industry, and professionals who understand how to develop, validate, and maintain these technologies responsibly will be at the center of the next wave of healthcare innovation. However, unlike traditional software, AI/ML models introduce unpredictable outputs, evolving behaviors, and significant ethical and regulatory challenges. This training helps participants navigate these complexities by providing practical, real-world guidance on how to build compliant, safe, and trustworthy AI-enabled devices.

Students who take this webinar will gain the critical knowledge needed to align their work with FDA, EU MDR, and global standards, and will understand how regulators expect algorithms to be controlled, monitored, and updated, especially when models continue to learn after deployment. They will learn how to address unique AI risks such as dataset bias, cybersecurity threats to algorithms, data governance weaknesses, and model drift that can compromise clinical safety.

This training also helps attendees understand how to design and validate AI/ML systems so they perform reliably across diverse patient populations. Participants will learn how to plan clinical performance evaluation, implement human oversight where needed, and document transparency, explainability, and ethical safeguards that regulators increasingly require.

Whether a participant is involved in design, risk management, clinical evaluation, software development, regulatory affairs, or quality assurance, this knowledge will become essential in the years ahead. Taking this course will not only strengthen professional expertise but also position individuals as leaders in one of the most transformative areas of modern medical technology.

Areas Covered in the Session:

  • Welcome & Context for AI/ML in Medical Devices (5 minutes)
    • Growth of AI/ML-enabled medical technologies across diagnostic, therapeutic, and monitoring tools
    • How AI differs from traditional software (non-deterministic outputs, continuous learning)
    • Why regulators are updating expectations for safety, transparency, and lifecycle control
    • Key assurance goals: patient safety, clinical validity, data robustness, and algorithm traceability
  • Regulatory Landscape & Expectations (12 minutes)
    • FDA approach to Artificial Intelligence/Machine Learning (AI/ML) medical devices
      • Benefit-risk lens applied to data, outputs, clinical decisions
      • Total Product Lifecycle (TPLC) expectations
    • EU MDR classification considerations and expectations for software as part of device functionality
    • Key regulatory themes:
      • Algorithm transparency and explainability
      • Data integrity and bias mitigation
      • Change management for evolving models
    • Standards shaping AI/ML device development:
      • ISO 13485, IEC 62304, ISO/IEC 27001, ISO/IEC 23894 (AI risk)
    • FDA's Predetermined Change Control Plan (PCCP) concept
  • Risk Management & Data Considerations (15 minutes)
    • Unique risk profile of AI/ML software:
      • Bias risk, model drift, data contamination, cybersecurity influence on predictions
    • Medical device risk management alignment to ISO 14971
    • Data sources and their impact on safety & model reliability:
      • Training vs. validation vs. real-world data
      • Data representativeness, cleaning, and lifecycle provenance
    • Data integrity and governance requirements
    • Risk control methods:
      • Performance monitoring rules
      • Human-in-the-loop controls
      • Confidence thresholds and decision logging
    • Evidence expectations for risk-driven testing
  • Design, Validation & Real-World Performance (15 minutes)
    • Design control expectations for AI/ML models within device development
    • Verification & validation of algorithms:
      • Scenario-based testing
      • Edge case evaluation
      • Adversarial robustness testing
    • Clinical performance evidence:
      • Validation on diverse and representative datasets
      • Post-market data requirements and real-world performance monitoring
    • Managing evolving algorithms:
      • Locked vs. adaptive models
      • PCCP documentation elements (model updates, retraining triggers, acceptable ranges of change)
    • Traceability expectations across model design, validation, and lifecycle decisions
  • Cybersecurity & Ethical Requirements (10 minutes)
    • Cybersecurity as a quality & safety driver:
      • Threats to algorithms (data poisoning, model hijacking)
      • Vulnerability management, monitoring, and testing
    • Ethical obligations in AI/ML devices:
      • Minimizing bias, enabling explainability, ensuring patient trust
      • Maintaining transparency in decision-making and risk communication
    • Documentation that demonstrates ethical and cybersecurity controls
  • Closing Takeaways & Q&A (3 minutes)
    • Assurance of AI/ML devices depends on data quality, lifecycle control, and model transparency
    • Validation must reflect real-world behavior, risks, and model adaptability
    • Ethical, cybersecurity, and explainability requirements are inseparable from compliance
    • Continuous monitoring & change control are essential for safe AI/ML deployment

Who Will Benefit:
  • Medical Device Software Development Teams
  • AI/ML Engineering & Data Science Groups
  • Quality Assurance (QA) and Quality Systems
  • Regulatory Affairs & Compliance Teams
  • Risk Management & Safety Engineering
  • Clinical Affairs / Clinical Evaluation Teams
  • Cybersecurity & Data Protection Teams
  • Information Technology (IT) / Cloud Services
  • Design & Development Engineering
  • Data Governance and Data Integrity Teams
  • Post-Market Surveillance / Vigilance Groups
  • Supplier & Vendor Management / Procurement
  • Product Management and Technical Leadership
  • Technical Documentation / Technical Writing Specialists
  • Research & Development (R&D) for Digital Health and Diagnostics


Speaker Profile
Charles H. Paul is the President of C. H. Paul Consulting, Inc. - a regulatory, manufacturing, training, and technical documentation consulting firm - celebrating its twentieth year in business in 2017. He has been a regulatory and management consultant and an Instructional Technologist for 30 years and has published numerous white papers on various regulatory and training subjects. The firm works with both domestic and international clients designing solutions for complex training and documentation issues.

He held senior positions in consulting and in corporate training development prior to forming C. H. Paul Consulting, Inc. He also worked for several years in government contracting, managing the development of significant Army-wide training development contracts that impacted virtually all of the active Army and changed the training paradigm throughout the military.

He has dedicated his entire professional career to explaining the benefits of performance-based training.

