Team: The Hyve (Jordan Ramsdell, Robert Gill, Brian Evans) + Open Targets/EBI team: Sebastian Lobentanzer, Ellen McDonagh
Link to the working doc: https://docs.google.com/document/d/18vPi23prPnrBOX3xAQInilqC6C4ssy2neOl05DDSH1Y/edit?usp=sharing
2024.04.12 Sub-team meeting
Recording: https://pistoiaalliance-org.zoom.us/rec/share/HAVWHW7_RTJW8jWk-acRxqvp3Z0sLNgrk7XUNazYz2I1sCYY78VYBeL_5LfxUmng.2bNwqJbp8nChbndC?startTime=1712930915000 Passcode: K2UD1@Fu
The Hyve team presented a working instance of BioCypher / BioChatter and ran a few queries live. The prototype handles queries on only a few topics (it does not contain the full OT KG) and displays answers in tabular or RDF form rather than as plain text; a sketch of the question-to-answer flow follows the links below:
Slides for the demo: https://docs.google.com/presentation/d/1D6qOeAlcZgMAfI5PNe5ZQ5YfZBQQHWSV/edit?usp=drive_link&ouid=111803761008578493760&rtpof=true&sd=true
Questions used in the demo: https://docs.google.com/document/d/1dHmPn1B6ToKqUBVJXryT10oKqaaqcvAS/edit?usp=drive_link&ouid=111803761008578493760&rtpof=true&sd=true
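For orientation, a minimal Python sketch of the flow the demo exercised: an LLM translates the English question into Cypher, the query runs against the graph, and the raw records are printed as the tabular answer. The Neo4j URI and credentials, the schema hint, and the prompt wording are illustrative assumptions, not The Hyve's actual implementation:

```python
# Sketch: English question -> LLM-generated Cypher -> tabular result.
# Assumes a local Neo4j instance with a BioCypher-built subgraph and an
# OPENAI_API_KEY in the environment; the schema hint and prompts are made up.
from neo4j import GraphDatabase
from openai import OpenAI

SCHEMA_HINT = "Nodes: Gene, Disease. Relationship: (Gene)-[:ASSOCIATED_WITH]->(Disease)."

def question_to_cypher(question: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Translate the user's question into a single Cypher query. "
                        f"Graph schema: {SCHEMA_HINT} Return only the query."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

def run_cypher(cypher: str) -> list[dict]:
    with GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password")) as driver:
        with driver.session() as session:
            return [record.data() for record in session.run(cypher)]

if __name__ == "__main__":
    cypher = question_to_cypher("Which genes are associated with psoriasis?")
    print(cypher)
    for row in run_cypher(cypher):
        print(row)  # dict per row, i.e. tabular output rather than plain-text prose
```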
The issue of unreliable Cypher query generation is seen as the biggest problem in the project
VM to inform the sponsors of the above and confirm with them [DONE: sponsors agree]
2024.04.19 Sub-team meeting
Passcode: PMik3&s%
Main objective: we need to set up a testing environment to systematically evaluate and improve LLMs' ability to generate Cypher queries
We will initially test GPT-4 and Mistral (the picks of the LLM selection team)
How is Cypher query generation testing done by the BioCypher team?
Results: https://biochatter.org/benchmark/
Process: https://biochatter.org/benchmarking/ and https://biochatter.org/benchmark-developer/
Should we focus on the contents of repeat_instruction?
For the RFP, think about how to implement AbbVie/Jon's method in this environment
Make sure that the English question, generated query, and the result are displayed for human evaluation
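A possible shape for that display, assuming the hypothetical question_to_cypher/run_cypher helpers sketched above: log each English question together with the generated Cypher and the result (or the failure), both to the console and to a CSV for later review:

```python
# Sketch of a human-evaluation log: question, generated Cypher, and result side by side.
import csv

def evaluate(questions, question_to_cypher, run_cypher, out_path="eval_log.csv"):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["question", "generated_cypher", "result"])
        for q in questions:
            cypher = question_to_cypher(q)
            try:
                result = run_cypher(cypher)
            except Exception as exc:  # malformed Cypher is exactly the failure mode under study
                result = f"QUERY FAILED: {exc}"
            writer.writerow([q, cypher, result])
            print(q, cypher, result, sep="\n", end="\n\n")
```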
Propose a set of English questions that we will use (limited by the current OT contents in BioCypher) - confirm with The Hyve that the questions in our list currently flagged as feasible are indeed feasible
Do we need to write “ideal” Cypher queries for these questions? - Yes, to make sure we understand the questions asked; this is a line item in the RFP
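If we do write them, the reference queries could simply live next to the questions, e.g. as a mapping from question to gold Cypher; the labels and properties below are assumptions, not the actual BioCypher/OT schema:

```python
# Illustrative question -> "ideal" reference query pairs (schema assumed).
# Comparing the *result sets* of generated vs. reference queries is more robust
# than comparing query strings, since many different queries are equivalent.
GOLD_QUERIES = {
    "Which genes are associated with psoriasis?":
        "MATCH (g:Gene)-[:ASSOCIATED_WITH]->(d:Disease {name: 'psoriasis'}) RETURN g.symbol",
    "Which diseases is TP53 associated with?":
        "MATCH (g:Gene {symbol: 'TP53'})-[:ASSOCIATED_WITH]->(d:Disease) RETURN d.name",
}
```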
VM to forward the questions in column M to the experts
Comment from Etzard: in his system, Elasticsearch is used across documents to bypass failing LLM-generated SPARQL queries. Is this conceptually similar to the method proposed by AbbVie? VM shared the recording of Jon's talk; we should ask for the slides
In general, it is worth collecting these and similar hacks that force LLMs to produce better queries; one illustrative pattern is sketched below
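As one illustration of that pattern (conceptually like Etzard's set-up, not his actual system): try the LLM-generated structured query first, and fall back to full-text search over the source documents when it fails. The index name and field are hypothetical:

```python
# Sketch of the fallback hack: structured query first, keyword search on failure.
from elasticsearch import Elasticsearch

def answer(question, question_to_cypher, run_cypher):
    try:
        return run_cypher(question_to_cypher(question))
    except Exception:
        # Fallback: bypass the graph and search the underlying documents instead.
        es = Elasticsearch("http://localhost:9200")
        hits = es.search(index="documents", query={"match": {"text": question}})
        return [h["_source"] for h in hits["hits"]["hits"]]
```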
Action item for the team (all members): please think about any additional requirements that we need to include into the RFP
Note: if payments (beyond minimal reimbursement of cloud expenses) are needed, we must run a competitive RFP/RFQ
2024.05.03 Sub-team meeting
Recording:
Passcode: jm+.s6T.
There were internet connectivity failures during the call, so the recording may be imperfect.
A testing environment to systematically evaluate and improve LLMs' ability to generate Cypher queries requires vendor support, and hiring a vendor requires an RFP. Vladimir informed the team about the upcoming RFP and the process for it
We will need to create "correct" answers to the scientific competency questions that we plan to use in the POC. ZS colleagues volunteered to perform this service. Details will be decided next week in a call between Vladimir and Bruce Press. This is FYI only, no action needed.
Participants observed that not all questions contained in our scientific competency question list can be answered based only on the information in Open Targets. For example, any questions that refer to clinical trials may not have complete answers based on the Open Targets contents alone. These issues will not have an effect on the POC project, however. In the future we may have to ask the project funders whether they would like to invest in data improvements or not.
We will have to brainstorm techniques for improving LLM performance at writing Cypher queries (or any structured query language); one candidate technique is sketched after the registry link below.
A registry for proposed techniques that would contain high-level or pseudocode algorithmic descriptions of them: https://docs.google.com/document/d/18vPi23prPnrBOX3xAQInilqC6C4ssy2neOl05DDSH1Y/edit?usp=sharing
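One candidate technique for that registry, sketched under the assumption of a hypothetical ask_llm chat helper: error-driven repair, where the database's error message is fed back to the model with a request to correct the query:

```python
# Sketch: retry Cypher generation, feeding the database error back to the LLM.
def generate_with_repair(question, ask_llm, run_cypher, max_attempts=3):
    prompt = f"Write a single Cypher query that answers: {question}"
    for _ in range(max_attempts):
        cypher = ask_llm(prompt)
        try:
            return cypher, run_cypher(cypher)
        except Exception as exc:
            prompt = (f"This Cypher query:\n{cypher}\nfailed with error:\n{exc}\n"
                      "Return a corrected query for the same question.")
    raise RuntimeError("no valid query produced within the attempt budget")
```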
Peter Dorr shared DOIs of papers that describe other techniques in this field; these are already captured in the brainstorming document
Peter Dorr agreed to organize a talk by his company in one of our main team (Wednesday) meetings, where the methods they have developed can be shared. Exact date TBD
2024.05.10 Sub-team meeting
Passcode: W^rVQ6W=
We agreed to focus on making edits and additions to the list of methods in https://docs.google.com/document/d/18vPi23prPnrBOX3xAQInilqC6C4ssy2neOl05DDSH1Y/edit?usp=sharing
We agreed to skip the call on May 17th (VM will cancel it) and meet again on May 24th