Meeting Minutes and Project Files

Publication References

Summary of Our Paper in Drug Discovery Today ("The 11 Commandments")

  1. Data science is not enough: domain knowledge is a must
  2. Quality data (…and metadata) → quality models
  3. Adopt a long-term data management methodology, e.g. FAIR, for life-cycle planning of scientific data
  4. Publish model code, and testing and training data, sufficient for reproduction of the research work, along with model results (see the sketch below)
  5. Use a model management system
  6. Use ML methods fit for the problem class
  7. Manage executive expectations
  8. Educate your colleagues – leaders in particular
  9. AI models + humans-in-the-loop → “AI-in-the-loop” (a term coined by Chas Nelson)
  10. Experiment, and fail fast if needed: a bad ML model that is quickly deemed worthless is better than a deceptive one
  11. Maintain an Innovation Center for moonshot-type technology programs (this COE is an example of one)

Vladimir A. Makarov, Terry Stouch, Brandon Allgood, Chris D. Willis, Nick Lynch, "Best practices for artificial intelligence in life sciences research", Drug Discovery Today, Volume 26, Issue 5, 2021, Pages 1107-1110, ISSN 1359-6446, https://doi.org/10.1016/j.drudis.2021.01.017.  Free pre-print available at https://osf.io/eqm9j/
Abstract: We describe 11 best practices for the successful use of artificial intelligence and machine learning in pharmaceutical and biotechnology research at the data, technology and organizational management levels. 
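
As an illustration of commandment 4, here is a minimal, hypothetical Python sketch of a reproducibility bundle: the trained model, the exact train/test splits, and run metadata (including the random seed) are saved together so the work can be reproduced. The toy dataset, file names, and paths are assumptions for illustration, not part of the published paper.

```python
# Hypothetical reproducibility bundle: model + data splits + run metadata.
import json
import platform
from pathlib import Path

import joblib
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

SEED = 42  # fixed seed so the run can be reproduced exactly

# Toy stand-in for the real training data.
X, y = make_classification(n_samples=300, n_features=8, random_state=SEED)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=SEED)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

out = Path("release")
out.mkdir(exist_ok=True)
joblib.dump(model, out / "model.joblib")                 # the model itself
np.savez(out / "data_splits.npz", X_train=X_tr, X_test=X_te,
         y_train=y_tr, y_test=y_te)                      # exact splits used
(out / "run_metadata.json").write_text(json.dumps({      # provenance record
    "seed": SEED,
    "test_accuracy": float(model.score(X_te, y_te)),
    "python": platform.python_version(),
}, indent=2))
```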

Walsh, I., Fishman, D., Garcia-Gasulla, D. et al. "DOME: recommendations for supervised machine learning validation in biology", Nature Methods 18, 1122–1127 (2021). https://doi.org/10.1038/s41592-021-01205-4


Business Analysis

Our business analysis is based on an enumeration of business personas and the challenges those personas may face when working with AI/ML technologies. The proposed Good Machine Learning Practices (GMLPs) address these challenges; the tables below map each persona's challenges to the GMLPs that address them and to the author responsible for each solution.

Data Scientist

| Challenge | GMLPs that address these challenges | Author responsible for solution(s) |
| --- | --- | --- |
| DS1: How do I pick a suitable data modeling technique? | 1. Recommend methods suitable for specific problem classes, or auto-ML systems. 2. The data scientist should learn the application domain and/or work with domain experts. | |
| DS2: How do I store and version a model? (see the versioning sketch after this table) | 1. Recommend model versioning systems. 2. Refer to best practices in ML model registration, deployment, and monitoring. 3. Refer to best practices and regulatory standards in requirements management and model provenance. | Christophe Chabbert |
| DS3: How do I tune hyperparameter values? | | |
| DS4: How do I validate a model? | 1. Refer to the DOME recommendations. 2. Quickly fail models that are bad; a deceptive model is worse than a bad one. | Fotis Psomopoulos (link to doc) |
| DS5: How do I assure the scalability of AI systems? | | Elina Koletou (link to doc) |
| DS6: How should I acquire and manage data? | 1. Recommend best practices from the FAIR Toolkit. 2. Evaluate data for fitness for purpose, in particular metadata quality. 3. Refer to best practices in exploration, cleaning, normalization, feature engineering, and scaling. | Natalja Kurbatova (link to doc), Christophe Chabbert, Berenice Wulbrecht |
| DS7: Where do I store predictions? | | Christophe Chabbert (link to doc) |
| DS8: How do I publish AI models? | Publish model code and testing and training data sufficient for reproduction of the research work, along with model results. Recommend an appropriate minimal information model. | Fotis Psomopoulos |
| DS9: How do I protect IP? | | |
| DS10: How should the applicability domain of the model be expressed? | | |
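
For DS2, here is a minimal sketch of model versioning with MLflow, one commonly used model registry. It assumes an MLflow tracking server with a database-backed model registry is already configured; the experiment name ("gmlp-demo") and registered-model name ("gmlp-demo-classifier") are hypothetical.

```python
# Hypothetical sketch: log and register a model version with MLflow (DS2).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

mlflow.set_experiment("gmlp-demo")  # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", float(model.score(X, y)))
    # Registration creates a new numbered version in the model registry,
    # giving an auditable lineage from parameters and metrics to artifact.
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="gmlp-demo-classifier")
```

Each re-run that passes `registered_model_name` produces a new registry version, which is the storage-and-versioning behavior DS2 asks for.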



Model User

| Challenge | GMLPs that address these challenges | Author responsible for solution(s) |
| --- | --- | --- |
| MS1: How should I manage the data (including data protection, versioning, and labeling)? | 1. Recommend best practices from the FAIR Toolkit. 2. Evaluate data for fitness for purpose, in particular metadata quality. 3. Refer to best practices in exploration, cleaning, normalization, feature engineering, and scaling. | Natalja Kurbatova, Christophe Chabbert, Berenice Wulbrecht (link to doc) |
| MS2: How do I avoid bias? | | |
| MS3: How do I make sure the model produces sensible answers? (see the sketch after this table) | 1. Set up a “human-in-the-loop” system; recommend tools for this where they exist. 2. Set up a business feedback mechanism for flagging model results that do not align with expectations. | Chas Nelson (MS3 doc, 21 Jul 2022) |
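
For MS3, here is a minimal sketch of the “human-in-the-loop” idea: predictions below a confidence threshold are routed to human review, and a feedback method lets reviewers flag results that do not match expectations. The class, the threshold value, and the workflow are hypothetical and not tied to any specific tool.

```python
# Hypothetical human-in-the-loop triage for model outputs (MS3).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Review:
    sample_id: str
    prediction: int
    confidence: float
    flagged: bool = False  # set by a reviewer if the result looks wrong

@dataclass
class HumanInTheLoopQueue:
    threshold: float = 0.8          # assumed confidence cut-off
    pending: List[Review] = field(default_factory=list)
    accepted: List[Review] = field(default_factory=list)

    def triage(self, sample_id: str, prediction: int, confidence: float) -> None:
        """Route low-confidence predictions to a human reviewer."""
        item = Review(sample_id, prediction, confidence)
        (self.pending if confidence < self.threshold else self.accepted).append(item)

    def flag(self, sample_id: str) -> None:
        """Business feedback: mark a result that does not match expectations."""
        for item in self.pending + self.accepted:
            if item.sample_id == sample_id:
                item.flagged = True  # flagged items feed review and retraining

queue = HumanInTheLoopQueue()
queue.triage("sample-001", prediction=1, confidence=0.55)  # goes to review
queue.triage("sample-002", prediction=0, confidence=0.97)  # auto-accepted
queue.flag("sample-001")
```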

Architect

| Challenge | GMLPs that address these challenges | Author responsible for solution(s) |
| --- | --- | --- |
| A1: How do I assure the scalability of AI systems? | | Christophe Chabbert (link to doc) |
| A2: How should I manage the data? | See MS1 above (Model User). | Christophe Chabbert, Natalja Kurbatova (link to doc) |
| A3: How do I set up continuous integration/delivery? (see the packaging sketch after this table) | 1. Refer to best practices in DevOps. 2. Automate model packaging for ease of production delivery. | Elina Koletou (link to doc) |
| A4: How do I assure performance (execution speed)? | | Elina Koletou (link to doc) |
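
For A3, here is a minimal sketch of automated model packaging plus the smoke test a CI/CD pipeline could run against the packaged artifact. The artifact path ("dist/model.joblib"), the feature count, and the toy model are assumptions for illustration; a real pipeline would train in a separate job and publish the artifact between stages.

```python
# Hypothetical build-and-verify step a CI/CD pipeline could run (A3).
from pathlib import Path

import joblib
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

ARTIFACT = Path("dist/model.joblib")  # assumed artifact location

def package_model() -> None:
    """Build step: train and serialize the model artifact."""
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    ARTIFACT.parent.mkdir(exist_ok=True)
    joblib.dump(model, ARTIFACT)

def test_packaged_model_predicts() -> None:
    """Smoke test: the packaged artifact loads and returns sane output."""
    model = joblib.load(ARTIFACT)
    pred = model.predict(np.zeros((1, 10)))  # one dummy record, 10 features
    assert pred.shape == (1,)

if __name__ == "__main__":
    package_model()
    test_packaged_model_predicts()
    print("smoke test passed")
```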

Executive

| Challenge | GMLPs that address these challenges | Author responsible for solution(s) |
| --- | --- | --- |
| E1: How do I learn about the costs and benefits of AI/ML technologies and the limits of what is possible? | Recommend conferences, training materials, education products, review papers, and books; update these recommendations on a frequent cadence. | |
| E2: How do I value and justify investments in AI? | 1. Recommend technology valuation methodologies. 2. Provide business questions, goals, and KPIs. 3. Continuously review model performance against business needs (goals and KPIs). | |
| E3: How do I enable technology adoption? | 1. Bring expertise together to assess the compatibility of new technology with the business architecture. 2. Ensure sufficient knowledge transfer so that users are comfortable adopting and using the new approach. 3. Have a sustainability plan so that resources are available when needed. | |



Common challenges across ALL personas:

  1. How do I develop and use AI/ML models in an ethical manner (privacy protection, ethical use, full disclosure, etc.)?
  2. How do I develop and use AI/ML models in compliance with government regulations?

Common responsibility across multiple personas:

  1. Model improvement is a shared responsibility among the Model User, Architect, and Data Scientist personas.

Glossary notes: in the context of this document, these terms have the following meanings:

  1. Performance - metrics used to evaluate model quality, such as sensitivity, specificity, AUC, or similar measures, or execution speed (see the sketch below).
  2. Validation - a test to confirm the performance of the model (as defined above); not to be confused with “validation” in regulated systems.
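
To make the performance definition concrete, here is a small sketch computing sensitivity, specificity, and AUC with scikit-learn. The labels and scores are made up purely for illustration.

```python
# Illustrative only: made-up labels and scores for eight samples.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])
y_pred = (y_score >= 0.5).astype(int)  # assumed decision threshold of 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```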




Project members:


| Name | Affiliation | Email |
| --- | --- | --- |
| Fotis Psomopoulos | CERTH | |
| Brandon Allgood | Valo Health | |
| Christophe Chabbert | Roche | |
| Adrian Schreyer | Exscientia | |
| Elina Koletou | Roche | |
| Frederik van der Broek | Elsevier | f.broek@elsevier.com |
| David Wöhlert | Elsevier | D.Woehlert@elsevier.com |
| John Overington | Exscientia | |
| Loganathan Kumarasamy | Zifo R&D | |
| Neal Dunkinson | SciBite | |
| Simon Thornber | GSK | |
| Irene Pak | BMS | |
| Berenice Wulbrecht | Ontoforce | berenice.wulbrecht@ontoforce.com |
| Prashant Natarajan | H2O.ai | prashant.natarajan@h2o.ai |
| Valerie Morel | Ontoforce | valerie.morel@ontoforce.com |
| Yvonna Li | Roche | yvonna.li@roche.com |
| Silvio Tosatto | Unipd | silvio.tosatto@unipd.it |
| Natalja Kurbatova | Zifo R&D | Natalja.Kurbatova@zifornd.com |
| Mufis Thalath | Zifo R&D | Mufis.M@zifornd.com |
| Niels Van Beuningen | Vivenics | niels.vanbeuningen@vivenics.com |
| Ton Van Daelen | BIOVIA (3DS) | ton.vandaelen@3ds.com |
| Lucille Valentine | Gliff.ai | lucille@gliff.ai |
| Adrian Fowkes | Lhasa | adrian.fowkes@lhasalimited.org |
| Chas Nelson | Gliff.ai | Chas@gliff.ai |
| Paolo Simeone | Ontoforce | paolo.simeone@ontoforce.com |
| Vladimir Makarov | Pistoia Alliance | Vladimir.makarov@pistoiaalliance.org |