This is the subpage for the LLM Selection sub-team: Jon Stevens, Etzard Stolte, Helena Deus, Brian Evarts, Wouter Franke, Matthijs van der Zee.
Notes from the January 24th general PM call:
08:40:27 From Brian Evarts (CPT) to Everyone:
Has anyone tried QLoRA or other quantization techniques for fine-tuning?
08:42:05 From stevejs to Everyone:
@Brian we had a QLoRA fine-tuned Llama 2 model that we fine-tuned to increase the sequence length. Quality was OK, but we haven’t used it in production because the model was pretty beefy and we need more infra to increase its speed
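For context, here is a minimal QLoRA fine-tuning sketch using the Hugging Face transformers, peft, and bitsandbytes libraries. The base model, adapter rank, and target modules below are illustrative assumptions, not the configuration Jon's team used:

```python
# Minimal QLoRA sketch: 4-bit quantized base model plus trainable LoRA adapters.
# All model names and hyperparameters are illustrative, not the team's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Llama-2-7b-hf"  # hypothetical base model

# Load the base weights in 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(base)

# Freeze the quantized weights and train only small low-rank adapter matrices.
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```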
Notes from the January 26th small team call:
Recording: https://pistoiaalliance-org.zoom.us/rec/share/0hrNcFRs8SWpojuIySGTrObp7Q_3HB-n_2lxaMKbXDA9tH_dGQ2VqRf0NvLaytl1.UEZyKTY0mUeOhFwc
Passcode: 4=UyzhM$
Transcript: https://pistoiaalliance-org.zoom.us/rec/share/lvJ6tFaEXpApRatSufu9O3KnS0uyDDM3Ojdu3ceCIpXngtSdnm7MglEAIRFP_fGW.pLfIg7Ka3u7gK1KG
Passcode: 4=UyzhM$
Private brainstorming document is at: https://docs.google.com/document/d/1ip5vmGuRXVey1Ml_uiSUURAUf-a6KMCepDC_ovrds4M/edit?usp=sharing
List of candidate LLMs with evaluation criteria: https://docs.google.com/spreadsheets/d/1muOE2zweNl9LvW1yIsUcJRy3gGTDe2C_/edit?usp=sharing&ouid=111803761008578493760&rtpof=true&sd=true
Notes from February 1st small team call:
Recording https://pistoiaalliance-org.zoom.us/rec/share/3vT1H30cX_zFgEUtJ828Spj158rN2oqOssBNDc6hC1mti2TQ5G-uxdoBZkN7I8GQ.xtvwbqf---G-qnut?startTime=1706799805000
Passcode: vW*uB7^2
Transcript: https://pistoiaalliance-org.zoom.us/rec/share/dcrTHAezwaqAQtBPJMUSgGAb5LUS8lTiqJquis3yeyo6U6SgTvPVk6dDZ0K6oNIU.7s_B5JDf98QIHXz1
Passcode: vW*uB7^2
The main action item is to add information to the list of candidate LLMs: https://docs.google.com/spreadsheets/d/1muOE2zweNl9LvW1yIsUcJRy3gGTDe2C_/edit?usp=sharing&ouid=111803761008578493760&rtpof=true&sd=true
Notes from February 15th small team call:
Recording: https://pistoiaalliance-org.zoom.us/rec/share/kwJaJdaq_HSBP9ZrVL1DPWME3ufPxeyl3I2CBkr-A9cWwSgounfMg8NlaGjrGkry.HbzA8n0Oxo88oXQi
Passcode: @6UEvs7^
Transcript: https://pistoiaalliance-org.zoom.us/rec/share/1Ihp3towQE8NXeHp4-EANWwHeDhNHCIjaU7RVSv-uZRd0XWL1YY3nrGHssIEOwR2.d4qyWVgOutxa68Bt
Passcode: @6UEvs7^
Warning: BioCypher may not be W3C compliant and needs discussion in the large team before adoption, or alternatives should be considered; so far this is the most important open question.
This team cannot make progress until the BioCypher decision is made
Focus on smaller, cheaper models first? Pick a handful of models at various size points and look up their performance on general benchmarks
What is the task? The task dictates the choice of benchmarks
Verify that BioChatter has benchmarks for writing Cypher queries
How important is each benchmark? Perhaps create a linear model that combines multiple benchmark scores into a single weighted score (a sketch follows at the end of these notes)
Helena: This benchmark answers the question “what are the best embeddings” across a variety of tasks: https://huggingface.co/spaces/mteb/leaderboard
Convert into a weekly call at the same time on Thursdays for the next six weeks
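A minimal sketch of the linear scoring model floated above; the benchmark names, weights, and scores are placeholder assumptions, not real measurements:

```python
# Combine per-benchmark scores into one weighted score per candidate model.
# Benchmark names, weights, and scores below are placeholders, not real data.

# Relative importance of each benchmark (weights sum to 1.0).
weights = {"query_generation": 0.5, "text_answers": 0.3, "bio_terminology": 0.2}

# Per-model benchmark scores, each normalized to a 0-1 scale.
scores = {
    "model_a": {"query_generation": 0.82, "text_answers": 0.75, "bio_terminology": 0.60},
    "model_b": {"query_generation": 0.70, "text_answers": 0.88, "bio_terminology": 0.71},
}

def combined(model_scores: dict) -> float:
    """Weighted linear combination of normalized benchmark scores."""
    return sum(weights[b] * model_scores[b] for b in weights)

# Rank the candidates by their single combined score, best first.
for name in sorted(scores, key=lambda n: combined(scores[n]), reverse=True):
    print(f"{name}: {combined(scores[name]):.3f}")
```

Normalizing every benchmark to the same 0-1 scale before weighting matters; otherwise a benchmark with a larger raw range would dominate the combined score regardless of its weight.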
Notes from February 22nd small team call:
Recording: https://pistoiaalliance-org.zoom.us/rec/share/yre8AA5Hu32j-Wq-BAcW7Ksh7lHqZ1HfhYl43ULyD-elsE4vRRKEjSeXDbFtJw6M.E4S1t2UNdBIFhHaf
Passcode: *CRmXi.2
Transcript: https://pistoiaalliance-org.zoom.us/rec/share/c7dwf9Eoj0OyOFnvijJvtHQTs0oVXT8YXAIglD2SplFVaTEVY24RRnEkbZmbNiru.rNaPGc2EXZ064po-
Passcode: *CRmXi.2
See notes in: https://docs.google.com/document/d/1ip5vmGuRXVey1Ml_uiSUURAUf-a6KMCepDC_ovrds4M/edit?usp=sharing
Notes from March 7th small team call:
All models have been assigned. Please complete the details for the models assigned to you in the candidate-LLM spreadsheet linked above
Recording: https://pistoiaalliance-org.zoom.us/rec/share/2q1gdEsLpAgq52MUrBKAKmgtYBAvi8H8RNk-IoHtR5xzvEQm7zAh-NPEu1-dv5AV.nLMsRe1EBCrTZmuK Passcode: onhG57v%
Transcript: https://pistoiaalliance-org.zoom.us/rec/share/eAaotihtghO2xSoXI-LnT9nZV4LKWwpOgAgLddt9sD1plV_ka8GpovRjcxpceVYU.JHQ3noLEXpoDnzr1 Passcode: onhG57v%
See notes in: https://docs.google.com/document/d/1ip5vmGuRXVey1Ml_uiSUURAUf-a6KMCepDC_ovrds4M/edit?usp=sharing
Notes from March 14th small team call:
This call was short and not recorded
The remaining items in the LLM comparison table are the costs for the Llama models (Brian to look up) and the performance figures on BioCypher (here we are dependent on Sebastian and may have to wait)
There is an expectation, based on team members' experience on other projects, that fine-tuning of open-source models may be heavily dependent on the use case and may not be cost-effective
In that case GPT-4 would win
Notes from March 21st small team call:
Recording: https://pistoiaalliance-org.zoom.us/rec/share/tNABZ4XV54gHrEGC4O2aZsxA1UVm6qLlblc3pfGSOKDG8Hwv9cTt4BzRjybAlR_4.-3DaDIOxI2QHusxr Passcode: 8FhD=wtj
Transcript: https://pistoiaalliance-org.zoom.us/rec/share/dEvIc4DaaaxaLr7qW7iapr8cnWlezudOdQXW2LPIzPQmI8nwqoKRM95EJ2VtW3Jm.5f4X7Hwa2GxxrvH6 Passcode: 8FhD=wtj
Focus on assigning relative weights. The most important categories appear to be accuracy (on the dimensions of generating queries and writing plain-text answers based on structured input), which in turn requires awareness of biological terminology; then whether the model is open source; and finally the cost. The other factors are seen as collinear with these.
Homework: please review the spreadsheet and suggest values for the weights
Homework (action item for Brian): please add the information in your columns of the spreadsheet [DONE]
New risk identified: some proprietary LLMs, such as ChatGPT, are censored by their authors. This means that in answering scientific questions they may exhibit uncontrollable bias. This is a strong argument in favor of uncensored, open-source LLMs.
Based on today's discussion, we have to take back last week's statement that, all else being equal, GPT-4 would win.
Notes from March 28th small team call:
Recording: https://pistoiaalliance-org.zoom.us/rec/share/oxGWla7rTcksvYfxMt0NrepIGltJxS6aYo-UUeN5dYQ21F8rNr8IW9LLNQCO-T-Y.OyMU-x9CAyXjcieU Passcode: uwwr&H5A
Transcript: https://pistoiaalliance-org.zoom.us/rec/share/mQT2t0Z0mbcIr8Yq0y1sqsoeo_nByZoTWPw8EwubZDxihARk5mgT8D-Gk_1IYG0a.AjRl0-VIFJyQKNWw Passcode: uwwr&H5A
Prompt size may be important, and we increased its weight in the comparison table
The preferred architecture would allow for swapping of LLMs (see the interface sketch at the end of these notes)
Censorship is most likely already reflected in the performance scores, which discounts the censorship risk
Given that not all scores are available, we may end up having to do our own evaluation (see the evaluation sketch at the end of these notes)
Consider managed hosting platforms for open-source models (e.g. Amazon Bedrock) instead of renting servers at AWS
Preference for hosted models with pay-per-token pricing
Add this dimension to the spreadsheet - ACTION for Jon Stevens
Review rankings - ACTION for Brian and Etzard
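As mentioned above, the preferred architecture would let us swap LLMs. A minimal sketch of such an interface follows; the class and method names are hypothetical, not an agreed design:

```python
# Thin abstraction so the application never depends on one vendor's API.
# Class and method names are hypothetical, not an agreed team design.
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Common interface that every candidate model implements."""
    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 512) -> str: ...

class EchoBackend(LLMBackend):
    """Stand-in for testing; a real backend would call GPT-4, a Bedrock-hosted
    model, or a self-hosted Llama behind this same interface."""
    def complete(self, prompt: str, max_tokens: int = 512) -> str:
        return f"[echo] {prompt[:max_tokens]}"

def answer(llm: LLMBackend, question: str) -> str:
    # The application sees only the interface, so swapping LLMs means
    # passing a different backend instance; nothing else changes.
    return llm.complete(f"Answer from the knowledge graph context:\n{question}")

print(answer(EchoBackend(), "Which genes interact with TP53?"))
```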
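And a minimal sketch of what a do-it-ourselves evaluation could look like: exact-match scoring of generated Cypher against hand-written gold queries. The test case and the generate_cypher() function are placeholder assumptions:

```python
# Toy evaluation loop: compare model-generated Cypher against gold queries.
# The test case and generate_cypher() below are placeholders, not real data.

gold_cases = [
    ("Which genes are associated with asthma?",
     "MATCH (g:Gene)-[:ASSOCIATED_WITH]->(d:Disease {name: 'asthma'}) RETURN g.name"),
]

def normalize(query: str) -> str:
    """Crude normalization so whitespace and casing don't count as errors."""
    return " ".join(query.lower().split())

def generate_cypher(question: str) -> str:
    # Placeholder: a real harness would send the question to the candidate LLM.
    return "MATCH (g:Gene)-[:ASSOCIATED_WITH]->(d:Disease {name: 'asthma'}) RETURN g.name"

hits = sum(normalize(generate_cypher(q)) == normalize(gold) for q, gold in gold_cases)
print(f"exact-match accuracy: {hits / len(gold_cases):.2%}")
```

Exact match is a deliberately crude metric; executing both queries against the graph and comparing result sets would be more forgiving of equivalent but differently written Cypher, but exact match is enough to illustrate the harness.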