...
2024.02.07 Recording (Passcode: L58@v7Dg) Slides | Slides from the talk by Sebastian Lobentanzer
2024.03.20 Recording (Passcode: LZ!jZT4z) Slides | Architecture diagram in Draw.io | Architecture diagram PNG file
2024.04.17 Recording (Passcode: Yn2!5qJK) Slides | Slides from the talk by Jon Stevens
2024.08.07 Recording (Passcode: %.1&ukfM) Slides | Includes a talk by Peter Dorr: SPARQL query code generation with LLMs
2024.09.04 Recording (Passcode: t3?B*?CX) Slides | Includes a talk by Oleg Stroganov on agents controlling the actions of LLMs | Slides from the talk by Oleg Stroganov
2024.11.12 Email communication: Slides from the report by Oleg Stroganov
Lessons Learned
The highest-risk item is the generation of a structured query (Cypher or SPARQL) from a plain-English request. Some publications estimate a first-attempt success rate of about 48%.
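A minimal sketch of this risky step is shown below: prompting an LLM to translate an English question into Cypher given the graph schema. It assumes the OpenAI Python client; the model name, schema, and prompt wording are illustrative, not a recommendation.

```python
# Sketch of the high-risk step: English -> Cypher via an LLM.
# Assumes the OpenAI Python client; model and schema are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA = """Nodes: (Gene {symbol}), (Disease {name})
Relationships: (Gene)-[:ASSOCIATED_WITH]->(Disease)"""

def english_to_cypher(question: str) -> str:
    """Ask the model for a single Cypher statement; no validation yet."""
    prompt = (
        f"Graph schema:\n{SCHEMA}\n\n"
        f"Write one Cypher query answering: {question}\n"
        "Return only the query, with no explanation."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```

Given the reported first-attempt failure rate, the returned query still needs to be validated (for example, by running EXPLAIN against the database) before execution.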
The structure of the database used for queries matters: LLMs produce meaningful structured queries more easily for databases with a flat, simple structure.
A practically useful system requires filtering or secondary mining of the output in addition to natural-language narration, as sketched below.
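The sketch below illustrates one such post-processing step: dropping low-confidence and duplicate rows from the raw query output before handing it to the narration stage. The field names and threshold are illustrative assumptions.

```python
# Sketch: filter raw query output before the narration step.
# Field names ("id", "score") and the threshold are illustrative.
def filter_rows(rows: list[dict], min_score: float = 0.5) -> list[dict]:
    """Drop low-confidence rows and duplicates before narration."""
    seen: set[str] = set()
    kept: list[dict] = []
    for row in rows:
        key = row.get("id", "")
        if row.get("score", 0.0) >= min_score and key not in seen:
            seen.add(key)
            kept.append(row)
    return kept
```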
It is extremely important to implement a reliable named-entity recognition system. The same acronym can refer to completely different entities, which can be disambiguated either from context (hard) or by asking clarifying questions. Synonyms must also be mapped to canonical identifiers. Without these measures, naïve queries in a RAG environment will fail; a sketch of the normalization step follows.
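The following sketch shows entity normalization applied before query generation: synonyms are mapped to canonical identifiers, and ambiguous acronyms trigger a clarifying question. The lexicon entries are illustrative; a real system would back this with an ontology or terminology service.

```python
# Sketch of entity normalization before query generation.
# The lexicon is illustrative; a production system would use an
# ontology service and ask the user a clarifying question on ambiguity.
SYNONYMS = {
    "aspirin": "CHEBI:15365",
    "acetylsalicylic acid": "CHEBI:15365",
}
AMBIGUOUS = {
    # Same acronym, completely different entities.
    "PSA": ["prostate-specific antigen", "pressure swing adsorption"],
}

def resolve(mention: str) -> str:
    """Map a user mention to a canonical ID, or flag it for clarification."""
    if mention in AMBIGUOUS:
        options = ", ".join(AMBIGUOUS[mention])
        raise ValueError(f"'{mention}' is ambiguous ({options}); ask the user.")
    key = mention.lower()
    if key in SYNONYMS:
        return SYNONYMS[key]
    raise KeyError(f"Unknown entity '{mention}'; NER fallback needed.")
```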
LLMs may produce different structured queries from the same natural-language question. These queries may all be semantically and structurally correct, yet differ in their assumptions about the number of items to return, the ordering, or the absence of either. These variations are not deterministic, so the same natural-language question may yield different answers on different runs. It is therefore necessary to state limits, ordering, and other parameters explicitly when asking the question, or to establish the user's intent in a conversation with a chain of thought. A related question is whether implementation details of typical RAG systems with a vector database may impose implicit restrictions on which data the LLM explores, and thus artificially limit the answers. This may happen without the user knowing about the restrictions (and perhaps even without the system's authors realizing that they embedded such restrictions in the specifics of the system architecture).
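One way to address the non-determinism described above is to pin down the limit and ordering in the question itself, rather than letting the model guess them. The sketch below does this with a small helper; the class and function names are hypothetical.

```python
# Sketch: make generated queries deterministic by stating limits and
# ordering explicitly instead of letting the LLM assume them.
# QueryConstraints and constrain_question are hypothetical names.
from dataclasses import dataclass

@dataclass
class QueryConstraints:
    order_by: str   # property to sort on, e.g. "gene symbol"
    limit: int      # explicit cap instead of an LLM-chosen one

def constrain_question(question: str, c: QueryConstraints) -> str:
    """Append explicit instructions so repeated runs produce the same query."""
    return (
        f"{question} "
        f"Order the results by {c.order_by} ascending and return exactly "
        f"the first {c.limit} rows; do not add any other LIMIT or ORDER BY."
    )

# The bare question leaves order and limit to the model; the constrained
# version pins them down so two runs return the same answer.
print(constrain_question(
    "Which genes are associated with asthma?",
    QueryConstraints(order_by="gene symbol", limit=20),
))
```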
...