...
All models have been assigned. Please complete the details for the models assigned to you in this spreadsheet.
Recording: https://pistoiaalliance-org.zoom.us/rec/share/2q1gdEsLpAgq52MUrBKAKmgtYBAvi8H8RNk-IoHtR5xzvEQm7zAh-NPEu1-dv5AV.nLMsRe1EBCrTZmuK Passcode: onhG57v%
Transcript: https://pistoiaalliance-org.zoom.us/rec/share/eAaotihtghO2xSoXI-LnT9nZV4LKWwpOgAgLddt9sD1plV_ka8GpovRjcxpceVYU.JHQ3noLEXpoDnzr1 Passcode: onhG57v%
See notes in: https://docs.google.com/document/d/1ip5vmGuRXVey1Ml_uiSUURAUf-a6KMCepDC_ovrds4M/edit?usp=sharing
Notes from March 14th small team call:
This call was short and was not recorded.
The remaining items in the LLM comparison table are the costs for the Llama models (Brian to look up) and the performance figures on BioCypher (here we are dependent on Sebastian and may have to wait).
There is an expectation, based on team members' work experience on other projects, that fine-tuning of open-source models may be heavily dependent on the use case and may not be cost-effective.
In that case, GPT-4 would win.