Tortoise17 opened this issue 8 months ago
I'm not sure what you want to achieve. Could you add an example of what your app/project should do and how WebLLM could help you?
I want to run the LLM offline, and I also want to use it for search optimization over internal data, using LLM-based vector interlinking.
Something like this: search system optimization, but not only for a code base. I mean local database search optimization that works together with the LLM and coordinates with local folders.
I'm not sure I fully understand your question. But from the link you provided, it seems you want to use an LLM to make sense of all the data in a local folder so you can ask questions or complete tasks based on that data, right?
The idea of WebLLM is that you can run large language models in the browser. So either you provide a model that is fine-tuned for your use case (trained with the data you need), or you put the information into the prompt (context).
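For illustration, here is a minimal sketch of the second approach (context in the prompt) using WebLLM's OpenAI-style chat API. The model ID is just an example; any model from WebLLM's model list should work:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Download and initialize a model in the browser (runs fully client-side).
// The model ID is illustrative; pick one from WebLLM's model list.
const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");

// Put the local data the model should reason about directly into the prompt.
const context = "...text extracted from your local documents...";

const reply = await engine.chat.completions.create({
  messages: [
    { role: "system", content: `Answer using only this data:\n${context}` },
    { role: "user", content: "Which document mentions the Q3 report?" },
  ],
});
console.log(reply.choices[0].message.content);
```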
Since WebLLM runs as a browser application, it has very limited access to the file and folder structures on your device. You could take a look at the File System Access API: with it you could prepare the needed context and add it to your prompt. But from my understanding, it won't be able to just remember all the necessary information and then generate the text you would expect.
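Roughly, gathering that context from a user-picked folder could look like the sketch below. Note this assumes a Chromium-based browser (`showDirectoryPicker` isn't available everywhere), and `collectFolderText` is just an illustrative helper name:

```ts
// Let the user pick a folder and concatenate its text files into one
// context string. Access is explicitly user-granted via the picker;
// a web page cannot read arbitrary paths on its own.
async function collectFolderText(): Promise<string> {
  // Cast to any because showDirectoryPicker may be missing from TS lib types.
  const dir = await (window as any).showDirectoryPicker();
  const parts: string[] = [];
  for await (const [name, handle] of dir.entries()) {
    if (handle.kind === "file" && name.endsWith(".txt")) {
      const file = await handle.getFile();
      parts.push(`## ${name}\n${await file.text()}`);
    }
  }
  return parts.join("\n\n");
}

// Usage: feed the collected text into the prompt as in the earlier sketch.
// const context = await collectFolderText();
```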
Yes, that's quite close to what I want to implement: make sense of the search input and the data already available in local folders, and on demand build interlinks and relationships between them, for search optimization rather than text generation. Something like this.
Is it possible to use this pipeline to search locally and offline inside the system, reading from specific folders and paths? Is such a setup/deployment possible? Please guide me if you can.