
This is the subpage for the LLM Selection sub-team: Jon Stevens, Etzard Stolte, Helena Deus, Brian Evarts, Wouter Franke, and Matthijs van der Zee.

Notes from the January 24th general PM call:

08:40:27 From Brian Evarts (CPT) to Everyone:
Has anyone tried QLoRA or other quantization techniques for fine-tuning?
08:42:05 From stevejs to Everyone:
@Brian we had a Llama 2 model fine-tuned with QLoRA to increase the sequence length. Quality was OK, but we haven't used it in production because the model was pretty beefy and we need more infra to increase its speed.
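
For reference, below is a minimal sketch of the kind of QLoRA setup being discussed, using the Hugging Face transformers, peft, and bitsandbytes libraries. The base model name, adapter hyperparameters, and target modules are illustrative assumptions, not the team's actual configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Hypothetical base model; swap in whichever checkpoint is actually used.
model_name = "meta-llama/Llama-2-7b-hf"

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for training and attach low-rank adapters;
# only the small adapter matrices are updated during fine-tuning.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                      # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only adapter weights are trainable

The resulting model can then be trained with a standard Trainer loop on the target data; note that, as mentioned in the chat, quantized fine-tuning reduces memory for training but inference on the full-size base model still requires substantial infrastructure.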

Notes from the January 26th small team call:

Notes from February 1st small team call:

Notes from February 15th small team call:

Notes from February 22nd small team call:

Notes from March 7th small team call:

Notes from March 14th small team call:

  • This call was short and not recorded

  • The remaining items in the LLM comparison table are the costs for the Llama models (Brian to look up) and the performance figures on BioCypher (for these we depend on Sebastian and may have to wait)

  • Based on team members' experience on other projects, the expectation is that fine-tuning of open-source models is heavily use-case dependent and may not be cost-effective

  • In that case, GPT-4 would win the comparison

Notes from March 21st small team call:

Notes from March 28th small team call:
