
This is the subpage for the LLM Selection sub-team: Jon Stevens, Etzard Stolte, Helena Deus, Brian Evarts, Wouter Franke, Matthijs van der Zee.

Notes from the January 24th general PM call:

08:40:27 From Brian Evarts (CPT) to Everyone:
Has anyone tried QLoRA or other quantization techniques for fine-tuning?
08:42:05 From stevejs to Everyone:
@Brian we had a Llama 2 model fine-tuned with QLoRA to increase the sequence length. Quality was OK, but we haven’t used it in production because the model was pretty beefy and we need more infra to increase its speed.
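
For reference, below is a minimal sketch of the kind of QLoRA fine-tuning setup mentioned above, using the Hugging Face transformers, peft, and bitsandbytes libraries. The model name, LoRA rank, and target modules are illustrative assumptions, not the team's actual configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_name = "meta-llama/Llama-2-7b-hf"  # illustrative; not necessarily the model the team used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA adapters on top of the frozen 4-bit weights.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed value)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections, typical for Llama
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters train; base weights stay quantized
```

Only the adapter weights are updated while the base weights stay frozen in 4-bit, which is what keeps fine-tuning memory low; serving the resulting model can still be slow without dedicated inference infrastructure, which matches the speed concern above.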

Notes from the January 26th small team call:

Notes from the February 1st small team call:

Notes from the February 15th small team call:

  • Warning: BioCypher may not be W3C compliant and needs discussion in the large team before adoption, or we should consider alternatives. So far this is the most important open question.

    • This sub-team cannot make progress until the BioCypher decision is made.

  • Focus on smaller, cheaper models first? Pick a handful of models at various size points and look up their performance on general benchmarks

  • What is the task? The task dictates the choice of benchmarks

  • Verify that BioChatter has benchmarks for writing Cypher queries

  • How important is each benchmark? Perhaps create a linear model that combines multiple benchmark scores into a single weighted score (see the sketch after this list)

  • Helena: This benchmark answers the question “what are the best embeddings” across a variety of tasks: https://huggingface.co/spaces/mteb/leaderboard

  • Convert this into a weekly call at the same time on Thursdays for the next six weeks
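
As a sketch of the weighted-score idea above: the benchmark names and weights below are placeholders, not choices the team has agreed on.

```python
# Combine per-benchmark scores into one number via a weighted linear model.
# Benchmark names and weights are illustrative placeholders.

WEIGHTS = {
    "general_qa": 0.3,
    "cypher_generation": 0.5,  # e.g., a BioChatter-style query-writing benchmark
    "biomedical_ner": 0.2,
}

def combined_score(scores: dict[str, float]) -> float:
    """Weighted sum of benchmark scores, each assumed normalized to [0, 1]."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS) / total_weight

# Example: compare candidate models at different size points.
models = {
    "small-7b":   {"general_qa": 0.62, "cypher_generation": 0.55, "biomedical_ner": 0.70},
    "medium-13b": {"general_qa": 0.71, "cypher_generation": 0.60, "biomedical_ner": 0.74},
}
for name, scores in models.items():
    print(f"{name}: {combined_score(scores):.3f}")
```

The weights would need to be set by the team according to how important each task is, and each benchmark should be normalized to a common scale before combining so no single metric dominates.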
