2018-05-08 Project Team Meeting - Agenda & Minutes
Date
2018-05-08
Attendees
- Richard Norman (Pistoia Alliance)
- Francisco Fernandez (Abvance Biotech)
- Bryan Jones (Eli Lilly)
- Chris Lloyd (MedImmune)
- Carmen Nitsche (Pistoia Alliance)
- Abhinandan Raghavan (Novartis)
- Kieran Todd (GSK)
Agenda
| Time | Item | Who | Output |
|---|---|---|---|
| 17:00 - 18:00 (60 mins) | Value proposition and ‘sliding-scale scenarios’ for top-level management proposal. Sign-up on IP3. | Richard, All | Agreed value proposition and draft scenarios for top-level management |
| If time permits | EMBL-EBI presentation outline | | |
Minutes
Scope
- We should distinguish between early screening (high-throughput) and profiling (detailed, low-throughput) assays, even though both are included in the scope. The expectation is that there will be more data points for early screening than for profiling assays.
Value proposition
Specific added value articulated:
- Sharing data from poorly behaved mAbs would add value beyond the Jain et al. publication, which considered only reasonably well-behaved mAbs.
- Developing a minimum set of ‘value’ assays that can be used for each project, once the data output overlaps and correlations between the assays currently used in each organisation are understood.
- Organisations have a variety of early high-throughput assays that provide a qualitative sense of whether a molecule should be progressed, but these assays lack rigour in endpoint prediction. Generating enough data from enough molecules with varying properties/behaviours will enable earlier, higher-confidence prediction of endpoint data, and thus earlier decision-making on the developability of a molecule.
Key questions and discussion (can be used for further stakeholder engagement):
Due to the expected diversity and variability in methodology and assay formats, should existing data be shared (it might be difficult to compare similar data effectively), or should new data be generated from scratch following ‘standardisation’ of assay formats?
- The consensus is to start by sharing existing data, even though it may be harder to interpret and standardise at first. This is supported by the following:
  - Agreement that it might be difficult to compare data across companies (this can already be challenging internally given day-to-day variability, which is why controls are in place) unless some measure is in place, e.g. binary scoring based on set flags. However, not all assays will display the same level of variability, and we could start standardising based on the assays whose data are most similar across companies.
  - To add value and understand the full range of data, we would need to standardise based on selected assays and conditions, so picking and sharing data on molecules with a range of properties and behaviours is key. This will only be possible using historical data in the first instance.
  - We need to understand the best conditions for each assay and which conditions provide the most ‘value’. For this we need to look at existing data before we can generate new data under standardised assay conditions.
Would a ‘standardised’ assay be used by organisations going forward, i.e. would it replace an existing assay?
- The consensus is that although it may not replace an existing assay per se, it would add value to understand what a ‘standardised’ assay of a certain type looks like and where the variability of the internal version lies. This is supported by the following:
  - This may be a challenge due to the use of different platform formulations at different organisations, and some assays may be more predictive for particular platform formulations. However, if a ‘standardised’ assay is used for the purpose of flagging poorly behaved molecules, then perhaps this doesn’t matter.
  - If we have enough data, some of it will be standardisable, which is where the value creation lies.
  - Poorly behaved molecules are needed to add a sense of scale, but we must be mindful that too much data from poor molecules may make it difficult to standardise assays and draw correlations between the data.
  - Standardisation can be done at many levels. At a simple level it could provide a sense of the variation between similar assays run at different organisations. Sharing SOPs and buffer conditions could be part of an upscale scenario.
How different is, and what is the added value of, a set of ‘bang for buck’ assays vs. the currently used early selection (high-throughput) assays?
- The consensus is that we should focus on early (high-throughput) assays to gain a real understanding of them and optimise early decision-making, especially if this understanding can then be used to predict endpoint data more reliably.
Scenarios
- Scenarios need further development, and pressure testing by internal executive and legal sponsors, but there is agreement that the current proposal illustrates a base-case scenario.
- Potentially:
  - Base case – structural descriptors vs. sequence/structure; high-throughput, profiling, and endpoint assays for published (patented) molecules and unpublished molecules from dead projects.
  - Upscale – sequence/structure, detailed SOPs, assays not included in the base case? ‘Other’ unpublished data?
----------
Richard Norman – 9 May 2018