...

  • Define the most common research questions in target discovery and validation. Establish agreement within the project team that these are indeed the core target-discovery business questions, and rank-order them by vote according to perceived relative importance. If there are many such questions, pick the top ones and agree on exactly how many. This paper can serve as a starting point for listing relevant competency questions: https://www.sciencedirect.com/science/article/pii/S1359644613001542. (Failure to identify business questions, or picking too many or too few, is a project risk)

  • Open Targets can serve as a publicly available, standardized data source for this use case. Validate that Open Targets either has a ready-to-use Knowledge Graph implementation or can be converted into a KG at reasonable cost (this is a known project risk). A minimal query sketch follows this list.

  • Select a Large Language Model engine from publicly accessible sources. (Failure to identify a suitable open LLM is a project risk)
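
A minimal sketch of pulling Open Targets data programmatically, assuming the public Open Targets Platform GraphQL endpoint at https://api.platform.opentargets.org/api/v4/graphql; the query fields (disease, associatedTargets, approvedSymbol, score) reflect our reading of the public schema and should be verified against the current API documentation, and the EFO identifier used is only a placeholder.

```python
# Hedged sketch: fetch top target-disease associations from the public
# Open Targets Platform GraphQL API. Endpoint and field names should be
# checked against the current schema; the EFO id below is a placeholder.
import requests

GRAPHQL_URL = "https://api.platform.opentargets.org/api/v4/graphql"

QUERY = """
query targetsForDisease($efoId: String!) {
  disease(efoId: $efoId) {
    name
    associatedTargets(page: { index: 0, size: 10 }) {
      rows {
        target { id approvedSymbol }
        score
      }
    }
  }
}
"""

def top_targets(efo_id: str) -> list[dict]:
    """Return the top-scored targets associated with a disease, by EFO id."""
    resp = requests.post(GRAPHQL_URL, json={"query": QUERY, "variables": {"efoId": efo_id}})
    resp.raise_for_status()
    rows = resp.json()["data"]["disease"]["associatedTargets"]["rows"]
    return [{"symbol": r["target"]["approvedSymbol"], "score": r["score"]} for r in rows]

if __name__ == "__main__":
    for row in top_targets("EFO_0000270"):  # placeholder EFO identifier
        print(row)
```

Such tabular output can then be loaded into a graph database (for example via a tool such as BioCypher) to obtain the KG representation discussed above.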

Prompt-tuning procedure:

  • Retrieval Augmented Generation (RAG):

    • Ask a plain-English question, using a prompt-tuned version of one of the questions from the business-questions collection

    • This question is converted into a structured query by an LLM (Failure to generate a proper query for a KG database system is a risk; a sketch of the full loop follows this list)

    • Execute this query over a structured controlled data source (e.g. Open Targets DB)

    • An LLM converts the raw output of the query into a human-readable answer

    • An expert compares this answer or answers with the expected one(s)

  • An accuracy metric is computed, and the inputs and outputs are saved in a database

  • Prompt-tune the opening plain text question to maximize the output quality

  • Repeat this tuning cycle for all business questions in the collection
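
A minimal sketch of one iteration of this tuning loop, assuming the KG sits in a local Neo4j instance with a simplified Open Targets-like schema; `llm_complete`, the schema hint, the connection settings, and the accuracy proxy are all placeholders for the components to be selected.

```python
# Sketch of one prompt-tuning iteration of the RAG loop described above.
# Assumptions: a local Neo4j instance holds an Open Targets-derived KG with
# (:Target)-[:ASSOCIATED_WITH {score}]->(:Disease); `llm_complete` is a
# placeholder for the chosen LLM API.
from neo4j import GraphDatabase  # pip install neo4j

def llm_complete(prompt: str) -> str:
    """Placeholder: call the selected open- or closed-source LLM here."""
    raise NotImplementedError

def nl_to_cypher(question: str) -> str:
    prompt = (
        "Translate the question into a single Cypher query for a graph with "
        "(:Target)-[:ASSOCIATED_WITH {score}]->(:Disease). Return only the query.\n"
        f"Question: {question}"
    )
    return llm_complete(prompt)

def run_query(cypher: str, uri: str = "bolt://localhost:7687", auth=("neo4j", "password")):
    with GraphDatabase.driver(uri, auth=auth) as driver:
        records, _, _ = driver.execute_query(cypher)
        return [r.data() for r in records]

def narrate(question: str, rows) -> str:
    return llm_complete(f"Question: {question}\nQuery results: {rows}\nAnswer in plain English.")

def one_iteration(question: str, expected_answer: str) -> dict:
    cypher = nl_to_cypher(question)    # NL -> structured query (key risk)
    rows = run_query(cypher)           # execute over the controlled KG
    answer = narrate(question, rows)   # structured output -> plain English
    # Crude accuracy proxy; in the real loop an expert compares the answers
    # and the record is written to a database for later analysis.
    score = float(expected_answer.lower() in answer.lower())
    return {"question": question, "cypher": cypher, "answer": answer, "score": score}
```

The prompt wording (and the opening plain-text question) is what gets adjusted between iterations to maximize the resulting score.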

Historic (obsolete) versions of the above steps:

  • Historic vision for the query (this is obsolete and is only preserved here for scope traceability):

    • Define a plain-text data source for mining; one choice can be the entire set of paper abstracts indexed in PubMed plus, perhaps, the entire collection of open-access papers. (Failure to download this large volume of data can be a risk)

  • Ask a plain English question from the collection of business questions identified above

  • An LLM uses the question and the data source to produce a human-readable answer

  • Historic vision for the QA (likely not necessary):

    • Either the same or another LLM converts this answer into a Knowledge Graph. (KG generation from text is a source of risk)

    • This answer Knowledge Graph is compared to the KG of the original data source (such as Open Targets). The comparison must be local; in other words, irrelevant sections of the larger knowledge graph should not be considered. (The ability to find a ready KG comparison algorithm, or to code one fresh, is a risk; a rough comparison sketch follows this list.)
  • An accuracy metric is computed, and the inputs and outputs are saved in a database

  • Prompt-tune the opening plain text question to maximize the output quality

  • Repeat this tuning cycle for all business questions in the collection

  • Use these optimized prompts in RAG, below
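
Although this path is now obsolete, the "local" KG comparison idea is easy to illustrate: score the answer graph only against the slice of the reference KG that touches the answer's own entities. The triples and entity names below are purely illustrative.

```python
# Rough sketch of a local KG comparison: restrict the reference KG to triples
# that mention entities appearing in the answer graph, then score the overlap.
def local_overlap_score(answer_triples: set, reference_triples: set) -> float:
    """Fraction of answer triples present in the locally relevant reference slice."""
    if not answer_triples:
        return 0.0
    entities = {s for s, _, _ in answer_triples} | {o for _, _, o in answer_triples}
    local_reference = {t for t in reference_triples if t[0] in entities or t[2] in entities}
    return len(answer_triples & local_reference) / len(answer_triples)

answer = {("EGFR", "associated_with", "lung carcinoma"), ("EGFR", "is_a", "kinase")}
reference = {
    ("EGFR", "associated_with", "lung carcinoma"),
    ("EGFR", "is_a", "kinase"),
    ("TP53", "associated_with", "ovarian carcinoma"),  # outside the answer's locality, ignored
}
print(local_overlap_score(answer, reference))  # 1.0
```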

Retrieval Augmented Generation (RAG):

  • Ask a plain-English question using a prompt-tuned version of one of the questions from the business-questions collection

  • This question is converted into a structured query by an LLM (Failure to generate a proper query for a KG database system is a risk)

  • Execute this query over a structured, controlled data source (e.g. the Open Targets DB)

  • An LLM converts the raw output of the query into a human-readable answer
  • Quality assurance: it is desirable to produce a few instances of questions with known answers, ideally curated by human experts, and then use these question-answer pairs as sanity checks of the RAG pipeline
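
A sketch of how such expert-curated question-answer pairs could be wired up as sanity checks; `rag_answer` stands in for the full question-to-answer pipeline, and the pairs and matching criterion below are deliberately simplistic placeholders.

```python
# Sketch: expert-curated question/answer pairs used as end-to-end sanity checks.
QA_PAIRS = [
    {"question": "Which targets are most strongly associated with asthma?",
     "must_mention": ["IL13"]},                      # illustrative expected entity
    {"question": "Is EGFR associated with lung carcinoma?",
     "must_mention": ["EGFR", "lung carcinoma"]},
]

def rag_answer(question: str) -> str:
    """Placeholder for the full RAG pipeline (NL -> query -> KG -> narration)."""
    raise NotImplementedError

def run_sanity_checks() -> float:
    passed = 0
    for pair in QA_PAIRS:
        answer = rag_answer(pair["question"]).lower()
        if all(term.lower() in answer for term in pair["must_mention"]):
            passed += 1
    return passed / len(QA_PAIRS)  # fraction of expert-checked questions answered acceptably
```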

Not in scope:

  • Proprietary data are not used to train any model and project participants are not asked to share proprietary data in any way. But proprietary data may be analyzed in the context of this set-up by the individual participants using private instances of the pipeline.

  • Training of a brand new LLM is out of scope. The plan is to only prompt-tune an existing LLM.

...

Phase: Initiation

Milestones: Project charter

Deliverables:

  1. A list of candidate pre-competitive projects

  2. One or more projects selected by vote

  3. Project charter is drafted for the winning idea

  4. Raise minimal funds for the Elaboration phase

Est Date: 12/11/23 (Complete)

Phase: Elaboration

Milestones:

  1. Development plan

  2. Cost estimates

Deliverables:

  1. Risk analysis – see Risk Registry below

  2. Technology analysis to address the identified risks

  3. Work Breakdown Structure (WBS)

  4. Cost estimates

  5. Time estimates

  6. Gantt chart for Construction, with additional iterations as needed and a work schedule

  7. Make feasibility decisions before committing to build

Est Date: Q1 2024

Phase: Construction

Milestones:

  1. Target discovery pipeline

  2. Lessons learned published

Deliverables:

  1. Target discovery pipeline – detailed deliverables are not yet known

  2. Lessons learned recorded and published

Est Date: Q3 2024

Phase: Transition

Milestones: Sustainability achieved

Deliverables:

  1. Place the prototype into maintenance mode or outsource it for continuous development by another organization (e.g. a non-profit)

  2. Plan extension work, if any

Est Date: TBD

Risk Registry

Risks in green are resolved

Risks in yellow are in active research

Risks in white are general in nature

Technology research, feature and cost analysis, and selection:

  • Technology research

  • If no ready-to-use technology exists, estimate bespoke development

  • If estimates indicate infeasibility, this may become a gap

  • Need to assemble a new

Description: Failure to identify business questions, or picking too many or too few

Mitigation: Draft appropriate business questions – DONE; however, not all business questions can be answered with specific technologies, so this factor must be taken into account

Responsible Party: Lee Harland, John Wise, Bruce Press, Peter Revill

Description: Validate that Open Targets either has a ready-to-use Knowledge Graph implementation or can be converted into a KG at reasonable cost

Responsible Party: The Hyve (Jordan Ramsdell, Robert Gill, Brian Evarts); Open Targets/EBI (Sebastian Lobentanzer, Ellen McDonagh)

Description: Failure to identify a suitable LLM

Mitigation:

  • See this comparison

  • Recommend focusing on Cypher query generation ability as the key risk (below)

  • Start with one open-source and one closed-source LLM (say, Mistral and GPT-4), agree to explore others later, and meanwhile close this risk

Responsible Party: Jon Stevens, Etzard Stolte, Helena Deus; Brian Evarts; Wouter Franke, Matthijs van der Zee

Description: Does Open Targets use an ontology?

Mitigation: Perhaps The Hyve team has a ready answer. Resolution: yes, in general.

Responsible Party: The Hyve

Description: Failure to download a large volume of data (all of PubMed as a maximum) for the prompt-tuning of the LLM – CLOSED RISK

Mitigation: This may be unnecessary; TBD

Description: Failure to perform KG generation from text by an LLM

Mitigation:

  1. Technology research

  2. If no ready-to-use technology exists, estimate bespoke development (tuning an existing LLM for this purpose)

Description: Failure to perform local KG comparison with calculation of a score – CLOSED RISK

Description: Failure to generate a proper query for a KG database system by an LLM

Mitigation: Technology research

  • See refs 7, 8, 13, 14 below

  • BioCypher by EBI may have this capability already – needs evaluation

Responsible Party: The Hyve (Jordan Ramsdell, Robert Gill, Brian Evarts); Open Targets/EBI (Sebastian Lobentanzer, Ellen McDonagh)

Description: Failure to build a prototypical target discovery pipeline on the limited budget in case of mounting technical difficulties

Mitigation: Schedule the project in phases. Aim to answer known unknowns and to establish risk mitigation strategies early in this phase ("project elaboration")

Description: Is there a need for an architecture that would be broadly applicable beyond the narrow POC?

Mitigation:

  • Technology research

  • Need agents that can collect data from any source

  • How is the BioChatter/BioCypher architecture deficient?

Description: Some proprietary LLMs may be censored, thus introducing uncontrollable bias in the answers that they produce

Mitigation: DONE – censorship may already be reflected in the performance scores, so it is taken care of in the comparison of the LLMs. However, there is a team preference for open-source and uncensored LLMs

Responsible Party: Identified and resolved in the LLM sub-team
    Project Stakeholders

    Sponsors:

    ...

    Lessons Learned

    • The highest-risk item is generation of the structured query (Cypher or SPARQL) from a plain-English request. Some publications estimate a success rate of about 48% on the first attempt.

    • The structure of the database used for queries matters: LLMs produce meaningful structured queries more easily for databases with a flat, simple structure.

    • A practically useful system requires filtering or secondary mining of the output, in addition to natural-language narration.

    • It is extremely important to implement a reliable named entity recognition system. The same acronym can refer to completely different entities, which can be differentiated either from the context (hard) or by asking clarifying questions. The system must also map synonyms. Without these measures, naïve queries in a RAG environment will fail.

    • LLMs may produce different structured queries starting from the same natural-language question. These queries may be semantically and structurally correct, but may include assumptions about the limit on the number of items to return, or about ordering, or omit these altogether. These variations are not deterministic, so on different execution rounds the same natural-language question may yield different answers. It is therefore necessary to formulate limits, ordering restrictions, and other parameters explicitly when asking the question, or to determine the user's intentions in a conversation with a chain of thought. A related question is whether implementation specifics of typical RAG systems with a vector database may introduce implicit restrictions on which data the LLM explores and which it does not, thus artificially limiting the answers. This may happen without the user knowing the restrictions (and perhaps even without the system's authors realizing that they introduced such restrictions through the specifics of the system architecture).
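
A small sketch combining two of the mitigations above: normalizing entity mentions (acronyms, synonyms) to canonical names before query generation, and making limits and ordering explicit in the prompt so that the generated Cypher does not silently vary between runs. The synonym table is a toy stand-in for a proper NER and synonym-mapping service (for example a framework such as Kazu, ref. 16).

```python
# Sketch: entity normalization plus explicit limit/order constraints in the
# query-generation prompt. The synonym map is illustrative only; a real system
# would use a biomedical NER and synonym-mapping service.
SYNONYMS = {
    "her2": "ERBB2",              # acronym -> approved gene symbol
    "erbb2": "ERBB2",
    "breast cancer": "breast carcinoma",
}

def normalize_entities(question: str) -> str:
    out = question.lower()
    for mention, canonical in SYNONYMS.items():
        out = out.replace(mention, canonical)   # a real system would use NER spans
    return out

def build_prompt(question: str, limit: int = 10) -> str:
    q = normalize_entities(question)
    return (
        "Write exactly one Cypher query answering the question below. "
        f"Return at most {limit} rows, ordered by association score descending; "
        "do not introduce any other LIMIT or ORDER BY clauses.\n"
        f"Question: {q}"
    )

print(build_prompt("Which targets like her2 are linked to breast cancer?"))
```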

    References

    1. https://www.sciencedirect.com/science/article/pii/S1359644613001542

    2. https://www.nature.com/articles/s41573-020-0087-3

    3. https://www.epam.com/about/newsroom/press-releases/2023/epam-launches-dial-a-unified-generative-ai-orchestration-platform

    4. https://epam-rail.com/open-source

    5. Open LLM Leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

    6. Chatbot Arena: https://chat.lmsys.org/?arena

    7. Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning

      https://arxiv.org/abs/2310.01061

    8. Knowledge-Consistent Dialogue Generation with Language Models and Knowledge Graphs

      https://openreview.net/forum?id=WhWlYzUTJfP&source=post_page-----97a4cf96eb69--------------------------------

    9. BioChatter Benchmark Results: https://biochatter.org/benchmark-results/#biochatter-query-generation

    10. MTEB Benchmark (embeddings): https://huggingface.co/spaces/mteb/leaderboard

    11. Lora-Land and Lorax: https://predibase.com/lora-land

    12. A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases. Summary: queries over a KG with GPT 4 are much more accurate than queries over a SQL database with GPT 4. https://arxiv.org/abs/2311.07509

    13. https://towardsdatascience.com/evaluating-llms-in-cypher-statement-generation-c570884089b3

    14. https://medium.com/neo4j/enhancing-the-accuracy-of-rag-applications-with-knowledge-graphs-ad5e2ffab663

    15. linkedlifedata.com

    16. Kazu - Biomedical NLP Framework: https://github.com/AstraZeneca/KAZU

    17. https://github.com/f/awesome-chatgpt-prompts/tree/main

    18. Zhou, L., Schellaert, W., Martínez-Plumed, F. et al. Larger and more instructable language models become less reliable. Nature 634, 61–68 (2024). https://doi.org/10.1038/s41586-024-07930-y

    19. Karthik Soman, Peter W Rose, John H Morris, Rabia E Akbas, Brett Smith, Braian Peetoom, Catalina Villouta-Reyes, Gabriel Cerono, Yongmei Shi, Angela Rizk-Jackson, Sharat Israni, Charlotte A Nelson, Sui Huang, Sergio E Baranzini, Biomedical knowledge graph-optimized prompt generation for large language models, Bioinformatics, Volume 40, Issue 9, September 2024, btae560, https://doi.org/10.1093/bioinformatics/btae560

    20. https://www.promptingguide.ai/

    21. References on Named Entity Recognition in biological sciences: PubMed

    22. Incremental Knowledge Graphs Constructor Using Large Language Models