After it completes its analysis, the LLM produces output (hypotheses, a final report, significant events, etc.). We should allow the user to edit some portion of that output (most likely the hypotheses) and re-run the analysis with the edited content as input, similar to how the AI assistant allows editing a message in chat.
We will need more detail here on how this is intended to work.
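Pending that detail, one possible shape for the edit-and-rerun loop is sketched below. Everything here is an assumption: the `AnalysisOutput` structure, the `rerun_with_edits` helper, and the idea of seeding the next run with the prior output are all hypothetical, not an existing API.

```python
from dataclasses import dataclass, replace
from typing import Callable, Sequence


# Hypothetical shape of the LLM's analysis output (names are assumptions).
@dataclass(frozen=True)
class AnalysisOutput:
    hypotheses: tuple[str, ...]
    report: str = ""
    significant_events: tuple[str, ...] = ()


def rerun_with_edits(
    run_analysis: Callable[[AnalysisOutput], AnalysisOutput],
    previous: AnalysisOutput,
    edited_hypotheses: Sequence[str],
) -> AnalysisOutput:
    """Re-run the analysis, seeded with the user's edited hypotheses.

    The rest of the prior output (report, events) is carried along as
    context; only the hypotheses are replaced by the user's edits.
    """
    seed = replace(previous, hypotheses=tuple(edited_hypotheses))
    return run_analysis(seed)
```

In this sketch the UI would surface the hypotheses as editable text, then call `rerun_with_edits` with the user's edits; whether the other fields should also be editable, or whether the re-run starts fresh versus continuing the prior conversation, is exactly the open question noted above.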