...
LLM team members: please remind me which open-source LLM we picked (in addition to GPT-4)
VM will look it up in the recording
How is Cypher query generation testing done by the BioCypher team?
Results: https://biochatter.org/benchmark/
Process: https://biochatter.org/benchmarking/ and https://biochatter.org/benchmark-developer/
What is the focus of the repeat_instruction contents?
For the RFP, think about how to implement Abbvie/Jon’s method in this environment
Make sure that the English question, generated query, and the result are displayed for human evaluation
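As a rough illustration of the display requirement above, a minimal sketch of assembling (question, generated query, result) triples into a plain-text table for human evaluation. All function names, schema labels, and data here are invented for illustration, not the actual BioCypher/OT schema or pipeline:

```python
# Hypothetical sketch: render evaluation records so a human reviewer sees the
# English question, the LLM-generated Cypher, and the query result together.
def format_for_review(records):
    """Produce one labeled block per evaluation case."""
    lines = []
    for i, rec in enumerate(records, 1):
        lines.append(f"--- Case {i} ---")
        lines.append(f"Question : {rec['question']}")
        lines.append(f"Cypher   : {rec['cypher']}")
        lines.append(f"Result   : {rec['result']}")
    return "\n".join(lines)

records = [
    {
        "question": "Which proteins interact with TP53?",
        "cypher": "MATCH (p:Protein {name: 'TP53'})-[:INTERACTS_WITH]-(q:Protein) RETURN q.name",
        "result": ["MDM2", "EP300"],
    },
]
print(format_for_review(records))
```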
Propose a set of English questions that we will use (limited by the current OT contents in BioCypher) - Confirm with The Hyve that the questions currently flagged as feasible in our list are indeed feasible
Do we need to write “ideal” Cypher queries for these questions? - Yes; this also ensures we understand the questions asked - this is a line item in the RFP
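To make the “ideal query” idea concrete, one hypothetical gold-standard pair, pairing an English question with a hand-written Cypher query. The node labels, relationship type, and property names are assumptions for illustration, not the actual graph schema:

```python
# Hypothetical gold-standard pair: an English question plus the hand-written
# "ideal" Cypher query we would compare LLM output against.
gold_standard = {
    "question": "Which drugs target the gene EGFR?",
    "ideal_cypher": (
        "MATCH (d:Drug)-[:TARGETS]->(g:Gene {symbol: 'EGFR'}) "
        "RETURN d.name"
    ),
}
print(gold_standard["ideal_cypher"])
```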
VM: forward the questions in column M to the experts
Comment from Etzard: in his system, Elasticsearch is used across documents to bypass the failing SPARQL queries generated by the LLM. Is this conceptually similar to the method proposed by Abbvie?
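A conceptual sketch of the fallback pattern Etzard describes: try the LLM-generated structured query first, and on failure fall back to keyword search across documents. The tiny term-overlap matcher below stands in for Elasticsearch; all names and data are invented:

```python
# Sketch of the fallback idea (assumption: this approximates Etzard's setup).
def keyword_search(query, documents):
    """Rank documents by how many query terms they contain (stand-in for Elasticsearch)."""
    terms = set(query.lower().split())
    scored = [(sum(t in doc.lower() for t in terms), doc) for doc in documents]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

def answer(question, run_generated_query, documents):
    """Try the structured query first; fall back to keyword search if it fails."""
    try:
        return run_generated_query(question)
    except Exception:
        return keyword_search(question, documents)

docs = ["EGFR is targeted by gefitinib.", "TP53 interacts with MDM2."]

def failing_query(_question):
    # Simulates an LLM-generated SPARQL/Cypher query that errors out.
    raise RuntimeError("generated query failed")

print(answer("drugs targeting EGFR", failing_query, docs))
```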
Who will perform the work?
If VM shares the recording of Jon’s talk, also ask for the slides
In general, it is good for us to collect these and similar hacks that force LLMs to produce better queries
Action item for the team (all members): please think about any additional requirements that we need to include in the RFP
Note: if payments (beyond minimum cloud-expense reimbursement) are needed, we must run a competitive RFP/RFQ