
Interested? Who to contact:

Nick Lynch (Unlicensed)

Vladimir Makarov (PM AI Centre of Excellence)

The Problem:

When AI is used in healthcare and life sciences, many of the same regulatory issues arise as with other technologies used in healthcare.
This means the impact on the regulatory process will have significant implications, and it needs to be considered now to maximise the value of AI in the overall process.

These issues are wide-ranging and include the availability and use of data, public trust, risk mitigation, security, algorithmic bias, accountability, transparency, reproducibility, and the explainability of AI algorithms. Various AI codes of conduct have been proposed; for instance, the European Commission and the FDA both released draft AI ethics and regulatory guidelines in early 2019.

Regulators have asserted the need to regulate a category of artificial intelligence systems whose performance constantly changes based on exposure to new patients and data in clinical settings. These AI and machine-learning systems present a particular problem for regulators, because their behaviour may change continuously, in contrast to the fixed model assumed by traditional regulatory submissions.

FDA Proposal April 2019

The challenge for the industry is to collaborate as a group, understand the implications, and engage with the regulators.

The Proposal:

We welcome your input to this group and the next steps.

Set up a community with a series of teleconference (TC) meetings during Q2 and Q3.

Bring the community together in a couple of locations over the next six months, including Boston in October 2019 and a European meeting.

Work with regulators in North America and Europe to begin the discussions.

Have invited speakers at these meetings.

This is now the AI Centre of Excellence Community.
