As requested by a member of the community, it would be cool to implement a new feature for splitting documents using an LLM instead of our current token- or delimiter-based methods. This would allow for more intelligent, context-aware splitting of documents.
Proposed Idea
- Implement an LLM "scan" operation that can process a document and determine contiguous splits based on specified criteria.
- Allow users to provide a `split_criteria_prompt` that describes how to split the document (e.g., by topic).
- Use a scratchpad technique (similar to our reduce operation) to manage internal state/memory while splitting.
Technical Approach
1. Feed as much text as fits in the context window into the LLM.
2. Ask the LLM to output:
   - As many split points as it is confident in (verbatim phrases of 5-10 tokens that we can search for in the document to place the splits)
   - Any memory/state it wants to keep track of for splitting the next part of the document
3. Remove the processed chunks from the document.
4. Repeat until the entire document is processed.
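The loop above can be sketched as follows. This is a minimal illustration, not a committed implementation: `call_llm` is a hypothetical callable standing in for the real model call, and the window size and return shape are assumptions for the sake of the example.

```python
def scan_split(document, split_criteria_prompt, call_llm, window_chars=8000):
    """Sketch of the proposed scan loop (all names are illustrative).

    `call_llm` is a placeholder for the real model call: it takes the
    criteria prompt, a text window, and the scratchpad memory, and returns
    {"split_phrases": [...verbatim phrases...], "memory": "..."}.
    """
    chunks = []
    memory = ""
    remaining = document
    while remaining:
        window = remaining[:window_chars]      # feed as much text as fits
        result = call_llm(split_criteria_prompt, window, memory)
        memory = result.get("memory", "")      # carry scratchpad forward
        cut = 0
        for phrase in result.get("split_phrases", []):
            pos = remaining.find(phrase, cut)  # search for the phrase in the doc
            if pos == -1:
                continue                       # phrase not found verbatim: skip it
            end = pos + len(phrase)
            chunks.append(remaining[cut:end])  # close the chunk at the phrase
            cut = end
        if cut == 0:                           # no usable splits in this window:
            chunks.append(window)              # emit it whole to guarantee progress
            cut = len(window)
        remaining = remaining[cut:]            # remove processed chunks, repeat
    return chunks
```

Note the fallback when no returned phrase is found in the text: because the split points are verbatim search phrases, a hallucinated phrase can simply be skipped, and emitting the whole window when nothing matches guarantees the loop terminates.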
Considerations
Splitting strategy:
- All splits in one call
- One split at a time
- K splits at a time
- As many splits as the LLM can confidently provide
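The strategy choice could be a single parameter on the prompt we build per call. A rough sketch, assuming a `build_scan_prompt` helper that does not exist yet (its name, wording, and `max_splits` parameter are all placeholders):

```python
def build_scan_prompt(split_criteria_prompt, text, memory, max_splits=None):
    """Hypothetical prompt assembly for one scan call.

    max_splits=None asks for as many confident splits as possible;
    max_splits=1 gives one-at-a-time; max_splits=K gives K-at-a-time.
    """
    limit = ("as many split points as you are confident in"
             if max_splits is None
             else f"at most {max_splits} split points")
    return (
        f"Split the document according to: {split_criteria_prompt}\n"
        f"Scratchpad from previous calls: {memory or '(empty)'}\n"
        f"Return {limit}, each as a verbatim 5-10 token phrase copied "
        f"from the text, plus any scratchpad notes for the next call.\n\n"
        f"Document excerpt:\n{text}"
    )
```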
Balancing split quality with processing efficiency (smaller windows and fewer splits per call likely improve quality but increase the number of LLM calls)
Handling very large documents that exceed LLM context limits
Ensuring consistency in splitting criteria across multiple LLM calls
Proposed Interface Design
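As a starting point for discussion, a minimal sketch of what the user-facing configuration might look like; every field name here (`method`, `llm_scan`, etc.) is a hypothetical placeholder, not a settled API:

```python
# Hypothetical operation config for the new split method.
split_op = {
    "name": "split_report",
    "type": "split",
    "method": "llm_scan",  # proposed alternative to token/delimiter methods
    "split_criteria_prompt": "Start a new chunk whenever the topic changes.",
}
```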