To help users learn how to use Starsim, or to migrate between major versions, it would be nice to have a multi-agent RAG LLM copilot.
The first use case is migrating from Starsim v1.0 to v2.0. Conceptually, I'm imagining it would work as follows:
1. Input the Starsim v1.0 source code and examples as context (or into a RAG index).
2. Do the same with the Starsim v2.0 source code.
3. Select a file or files to translate/migrate from v1.0 to v2.0.
4. Perform the migration.
5. Optionally: provide a set of expected outputs from the v1.0 code, and have a separate agent check that the v2.0 outputs match (or are close to matching) these (see the sketch below).
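For the last step, a minimal sketch of what that check could look like, assuming the expected v1.0 outputs and the migrated v2.0 outputs are both saved as numeric CSV files with matching filenames; the helper name, file layout, and tolerance here are illustrative only, not a proposed API:

```python
# Sketch of the optional validation step: compare each migrated v2.0 output
# against the corresponding expected v1.0 output. The CSV format, matching
# filenames, and tolerance are assumptions for illustration only.
from pathlib import Path
import numpy as np
import pandas as pd

def validate_outputs(expected_dir, actual_dir, rtol=0.01):
    """Return the names of output files whose values don't (approximately) match."""
    failures = []
    for expected_file in sorted(Path(expected_dir).glob('*.csv')):
        actual_file = Path(actual_dir) / expected_file.name
        expected = pd.read_csv(expected_file)
        actual = pd.read_csv(actual_file)
        same_shape = expected.shape == actual.shape
        if not (same_shape and np.allclose(expected.to_numpy(), actual.to_numpy(), rtol=rtol)):
            failures.append(expected_file.name)
    return failures
```

For fully deterministic outputs an exact comparison would do; the tolerance just allows for small numerical drift between versions.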
As for the interface, I'm imagining an API like:
```python
import llm_migrate as llmm

llm = llmm.create(model='some-big-LLM-like-gpt-4-32k')  # Connect to e.g. the OpenAI API
llm.set_context_from('/path/to/old/code')     # Set context/RAG for the "old" (v1.0) version
llm.set_context_to('/path/to/new/code')       # Set context/RAG for the "new" (v2.0) version
llm.set_migrate('/path/to/files/to/migrate')  # Select the folder/files to migrate
llm.set_target('/path/to/expected/outputs')   # Optional; select the folder with outputs for validation
llm.migrate()   # Perform the migration
llm.validate()  # Optional; validate the outputs of the migrated files against the target outputs
```
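Under the hood, migrate() would presumably retrieve the most relevant chunks of the v1.0 and v2.0 codebases and hand them to the LLM along with the file being migrated. A rough sketch of that core step follows; since llm_migrate doesn't exist yet, the index.query() retrieval interface and the call_llm() client are hypothetical placeholders:

```python
# Sketch of the core migration step: retrieve relevant context from RAG indices
# built over the v1.0 and v2.0 codebases, then ask the LLM to rewrite one file.
# The index.query() interface and call_llm() are hypothetical placeholders.
from pathlib import Path

PROMPT_TEMPLATE = """You are migrating user code from Starsim v1.0 to v2.0.

Relevant v1.0 source:
{old_context}

Relevant v2.0 source:
{new_context}

Rewrite the following file so it runs under v2.0 with the same behavior.
Return only the updated code.

{user_code}
"""

def migrate_file(path, old_index, new_index, call_llm, top_k=5):
    """Retrieve context for one file and return the LLM's proposed v2.0 version."""
    user_code = Path(path).read_text()
    old_context = '\n\n'.join(old_index.query(user_code, top_k=top_k))  # chunks of v1.0 code
    new_context = '\n\n'.join(new_index.query(user_code, top_k=top_k))  # chunks of v2.0 code
    prompt = PROMPT_TEMPLATE.format(old_context=old_context, new_context=new_context, user_code=user_code)
    return call_llm(prompt)
```

A second agent could then run the rewritten file and pass its outputs to the validation check sketched above.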