This is the subpage for the LLM Selection sub-team: Jon Stevens, Etzard Stolte, Helena Deus, Brian Evarts, Wouter Franke, Matthijs van der Zee.
Notes from the January 24th PM call:
08:40:27 From Brian Evarts (CPT) to Everyone:
Has anyone tried QLoRA or other quantization techniques for fine-tuning?
08:42:05 From stevejs to Everyone:
@Brian we had a QLoRA fine-tuned Llama 2 model that we fine-tuned to increase the sequence length. Quality was OK, but we haven't used it in production because the model was pretty beefy and we'd need more infra to increase its speed.
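For context on the quantization idea behind the QLoRA discussion above, here is a minimal toy sketch of blockwise absmax quantization, the general scheme under which QLoRA stores base-model weights in 4 bits while training low-rank adapters. This is illustrative pure Python, not the actual bitsandbytes NF4 kernels, and the function names are hypothetical.

```python
# Toy sketch of blockwise absmax 4-bit quantization (hypothetical code,
# not the bitsandbytes implementation QLoRA actually uses).

def quantize_4bit(block):
    """Map a block of floats to signed 4-bit ints (-7..7) plus one scale."""
    absmax = max(abs(x) for x in block) or 1.0
    scale = absmax / 7.0  # largest magnitude maps to +/-7
    return [round(x / scale) for x in block], scale

def dequantize_4bit(qblock, scale):
    """Recover approximate floats from the 4-bit codes and the block scale."""
    return [q * scale for q in qblock]

weights = [0.12, -0.53, 0.98, -0.02]
codes, scale = quantize_4bit(weights)
recovered = dequantize_4bit(codes, scale)
# Each code fits in 4 bits; recovered values approximate the originals
# to within half a quantization step (scale / 2).
```

In QLoRA proper, the frozen base weights are stored this way (with an NF4 data type and double-quantized scales) and only small LoRA adapter matrices are trained in higher precision, which is what makes fine-tuning large models feasible on limited GPU memory.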
Private brainstorming document is at: https://docs.google.com/document/d/1ip5vmGuRXVey1Ml_uiSUURAUf-a6KMCepDC_ovrds4M/edit?usp=sharing