Query generation with LLM
Team: The Hyve; Jordan Ramsdell, Robert Gill, Brian Evans + Open Targets/EBI team: Sebastian Lobentanzer, Ellen McDonagh
Link to the working doc: Techniques to improve LLM performance for structured query generation
2024.04.12 Sub-team meeting
Recording:
Passcode: K2UD1@Fu
The Hyve team presented a working instance of BioCypher / BioChatter and ran a few queries live. This prototype can only handle queries on a few topics (it does not contain the full OT KG) and displays answers in tabular or RDF form, not as plain text:
Slides for the demo: https://docs.google.com/presentation/d/1D6qOeAlcZgMAfI5PNe5ZQ5YfZBQQHWSV/edit?usp=drive_link&ouid=111803761008578493760&rtpof=true&sd=true
Questions used in the demo: https://docs.google.com/document/d/1dHmPn1B6ToKqUBVJXryT10oKqaaqcvAS/edit?usp=drive_link&ouid=111803761008578493760&rtpof=true&sd=true
The issue of unreliable Cypher query generation is seen as the biggest problem in the project
VM to inform the sponsors of the above and confirm with them [DONE: sponsors agree]
2024.04.19 Sub-team meeting
Passcode: PMik3&s%
Main objective: we need to set up a testing environment to systematically evaluate and improve an LLM's ability to generate Cypher queries
We will initially test GPT-4 and Mistral (the picks of the LLM selection team)
How is Cypher query generation testing done by the BioCypher team?
Results: Overview - BioChatter
Process: The Living Benchmark - BioChatter and Developer Guide - BioChatter
Should the focus be on the repeat_instruction contents?
For the RFP, think about how to implement Abbvie/Jon’s method in this environment
Make sure that the English question, generated query, and the result are displayed for human evaluation
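Such a harness does not need much code. Below is a minimal sketch, assuming an OpenAI-compatible client and a local Neo4j instance; the model name, connection details, schema fragment, and example question are placeholders, not part of the agreed setup:

```python
from neo4j import GraphDatabase
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_cypher(question: str, schema: str) -> str:
    """Ask the LLM to translate an English question into a Cypher query."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; swap in Mistral or any other model under test
        messages=[
            {"role": "system",
             "content": f"Translate the user's question into a single Cypher query "
                        f"for a graph with this schema:\n{schema}\nReturn only the query."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

def run_query(query: str) -> list:
    """Execute the generated query against a local Neo4j instance."""
    with GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password")) as driver:
        with driver.session() as session:
            return [record.data() for record in session.run(query)]

if __name__ == "__main__":
    question = "Which targets are associated with asthma?"   # placeholder competency question
    schema = "(:Target)-[:ASSOCIATED_WITH]->(:Disease)"      # placeholder schema fragment
    cypher = generate_cypher(question, schema)
    result = run_query(cypher)
    # Show all three artefacts side by side for human evaluation, as agreed above.
    print("Question:        ", question)
    print("Generated Cypher:", cypher)
    print("Result:          ", result)
```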
Propose a set of English questions that we will use (limited by the current OT contents in BioCypher) - Confirm with The Hyve that the questions in our list currently flagged as feasible are indeed feasible
Do we need to write “ideal” Cypher queries for these questions? - Yes; make sure we understand the questions asked - this is a line item in the RFP
VM to forward the questions in column M to the experts
Comment from Etzard: in his system, Elasticsearch is used across documents to bypass failing SPARQL queries generated by the LLM; is this conceptually similar to the method proposed by Abbvie? VM shared the recording of Jon’s talk.
In general, it is good for us to collect these and similar hacks that force LLMs to produce better queries
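Hacks like the one Etzard described could be captured in the registry as short sketches. For illustration only (this is not a description of his actual system), a "structured query first, full-text search as fallback" pattern might look roughly like the following; the Elasticsearch index name, field name, and connection details are hypothetical:

```python
from typing import Callable
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical connection details

def answer(question: str, generate_and_run: Callable[[str], list]) -> dict:
    """Try the structured-query path first; fall back to full-text document search.

    `generate_and_run` is any callable that turns an English question into a
    structured query, executes it, and returns the resulting rows.
    """
    try:
        rows = generate_and_run(question)
        if rows:
            return {"source": "knowledge graph", "data": rows}
    except Exception:
        pass  # query generation or execution failed; fall through to document search
    # Fallback: plain full-text retrieval over an indexed document collection.
    hits = es.search(index="documents",                    # hypothetical index name
                     query={"match": {"text": question}})  # hypothetical field name
    return {"source": "documents",
            "data": [hit["_source"] for hit in hits["hits"]["hits"]]}
```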
Action item for the team (all members): please think about any additional requirements that we need to include into the RFP
Note: if payments (beyond minimal reimbursement of cloud expenses) are needed, we must run a competitive RFP/RFQ
2024.05.03 Sub-team meeting
Recording:
Passcode: jm+.s6T.
There were internet connectivity failures during the call, so the recording may be imperfect.
A testing environment to systematically evaluate and improve an LLM's ability to generate Cypher queries requires vendor support, and hiring a vendor requires an RFP. Vladimir informed the team about the upcoming RFP and the process for it
We will need to create "correct" answers to the scientific competency questions that we plan to use in the POC. ZS colleagues volunteered to perform this service. Details will be decided next week in a call between Vladimir and Bruce Press. This is FYI only, no action needed.
Participants observed that not all questions contained in our scientific competency question list can be answered based only on the information in Open Targets. For example, any questions that refer to clinical trials may not have complete answers based on the Open Targets contents alone. These issues will not have an effect on the POC project, however. In the future we may have to ask the project funders whether they would like to invest in data improvements or not.
We will have to brainstorm techniques for improving LLM performance in writing Cypher queries (or any other structured query language).
A registry for proposed techniques that would contain high-level or pseudocode algorithmic descriptions of them: Techniques to improve LLM performance for structured query generation
Peter Dorr shared DOIs of papers that describe other techniques in this field - these are already captured in the brainstorming document
Peter Dorr agreed to organize a talk by his organization in one of our main team (Wednesday) meetings, where the methods developed by his company can be shared. Exact date TBD
2024.05.10 Sub-team meeting
Passcode: W^rVQ6W=
We agreed to focus on making edits and additions to the list of methods in Techniques to improve LLM performance for structured query generation
Action items:
For all team members: review the list of methods and add new ones, or note errors and omissions in the existing descriptions. PLEASE LOOK INTO THIS
VM to ask EPAM colleagues about the location of the code that performs the selection of the query template in method #1 - DONE; ALAS, CODE NOT AVAILABLE
VM to confirm with Jon Stevens that the pseudo-code in method #3 is accurate - DONE, YES IT IS
Brian Evarts has connections/colleagues who work on similar technologies and will share the links
We agreed to skip the call on May 17th (VM will cancel it) and meet again on May 24th
2024.05.24 Sub-team meeting
Passcode: ZA!z3S5t
The RFP was announced two days ago; are there any questions?
Q&A captured in recording
Clarified work scope (asked by Etzard)
Clarified the performance time period: initially, 3 months, but with an option to extend (asked by Etzard)
Clarified position on the expected cost proposal: it does exist, but is not published (asked by Rob)
We agreed to focus on making edits and additions to the list of methods in Techniques to improve LLM performance for structured query generation - updates:
Research (Brian Evarts): see GitHub - neo4j-labs/text2cypher (a collection of text2cypher datasets, evaluations, and fine-tuning instructions) as an additional possible method
Also, the LangSmith library for LangChain - Brian Evarts to look up more details
If we could add iterative learning to the process, this might also help (see the sketch below)
NB: Similar ideas are explored by Dataiku: How to Build Tailored Enterprise Chatbots at Scale
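One way to read "iterative learning" here is an execution-feedback repair loop: run the generated query, and if it errors or returns nothing, feed that feedback back to the model and ask for a revised query. A minimal sketch, assuming an OpenAI-compatible client; the model name and the run_query callable (e.g. a Neo4j helper) are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_with_repair(question: str, schema: str, run_query, max_attempts: int = 3) -> str:
    """Generate a Cypher query and iteratively repair it using execution feedback."""
    messages = [
        {"role": "system",
         "content": f"Write a single Cypher query for this schema:\n{schema}\nReturn only the query."},
        {"role": "user", "content": question},
    ]
    for _ in range(max_attempts):
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        query = reply.choices[0].message.content.strip()
        try:
            rows = run_query(query)       # e.g. a Neo4j helper like the one sketched in the 2024.04.19 notes
            if rows:
                return query              # the query ran and returned data: accept it
            feedback = "The query ran but returned no rows. Revise it."
        except Exception as exc:          # syntax or execution error from the database
            feedback = f"The query failed with: {exc}. Revise it."
        messages += [{"role": "assistant", "content": query},
                     {"role": "user", "content": feedback}]
    return query  # return the last attempt even if it never succeeded
```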
The next meeting will be dedicated to RFP Q&A. We may cancel it if there are no questions that merit live discussion.
2024.06.21 Sub-team meeting
Agenda:
Last chance for any Q&A about the RFP
Look at the specific business questions that we will use for testing, and the “true” answers for them: 180624_OT_Questions.docx (this work was done by the ZS team - many thanks!)
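To complement human review, each entry in that document could eventually be paired with its reference query and expected rows for automated checking. A minimal sketch of such a gold-standard record and an order-insensitive comparison; the question, query, and rows below are made-up illustrations, not taken from the ZS document:

```python
from dataclasses import dataclass

@dataclass
class GoldQuestion:
    """One competency question together with its reference ("true") answer."""
    question: str
    reference_cypher: str   # the hand-written "ideal" query
    expected_rows: list     # the rows that query returns on the reference KG

# Hypothetical example entry; the real questions and answers live in 180624_OT_Questions.docx.
GOLD = [
    GoldQuestion(
        question="Which targets are associated with asthma?",
        reference_cypher="MATCH (t:Target)-[:ASSOCIATED_WITH]->(d:Disease {name: 'asthma'}) RETURN t.symbol",
        expected_rows=[{"t.symbol": "IL13"}, {"t.symbol": "IL4R"}],
    ),
]

def matches_gold(generated_rows: list, gold: GoldQuestion) -> bool:
    """Order-insensitive comparison of a generated result against the gold answer."""
    def canonical(rows):
        return sorted(tuple(sorted(row.items())) for row in rows)
    return canonical(generated_rows) == canonical(gold.expected_rows)
```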
Recording:
https://pistoiaalliance-org.zoom.us/rec/share/j1s8mE9Ytq4aEkMHoYroEKfcx0ThzeArtvbkcmMrB9yqCoJBHMltBgP-hsjLpgWz.-pEWfzVBloiYxqbA?startTime=1718978573000
Passcode: P0@LM7E!