UdaraJay / Pile

Desktop app for digital journaling.
https://udara.io/pile
MIT License

Ideas around integrating local LLM #6

Open paramaggarwal opened 8 months ago

paramaggarwal commented 8 months ago

Summary

Currently we expect users to figure out how to get an OpenAI API key and then configure billing on OpenAI's side. People may also not be comfortable sending their thoughts to OpenAI's servers. If we could run a local LLM on the device itself, we could generate reflections proactively instead of requiring an explicit button click.

One option that I feel could be very straightforward is https://ollama.ai/, and they have even documented this use case here: https://ollama.ai/blog/llms-in-obsidian
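For a rough sense of how little plumbing this needs, here is a minimal sketch of calling a locally running Ollama server over its HTTP API from TypeScript. This assumes Ollama is installed and serving on its default port 11434; the model name, prompt wording, and function name are illustrative, not anything Pile currently uses.

```ts
// Minimal sketch: generate a reflection via a local Ollama server instead of
// the OpenAI API. Uses Ollama's /api/generate endpoint with streaming disabled.
async function generateReflection(entry: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama2", // illustrative; any locally pulled model works
      prompt: `Write a short, thoughtful reflection on this journal entry:\n\n${entry}`,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // Ollama returns the generated text in `response`
}
```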

MSR-07 commented 7 months ago

Running a local language model on a device does indeed offer advantages in terms of privacy and ease of use, especially for generating reflections without relying on external servers or requiring users to manage API keys and billing configurations. Services like Ollama.ai provide a promising avenue for deploying language models locally, which can be particularly advantageous in scenarios where users prioritize privacy or prefer not to rely on cloud-based solutions.

The ability to generate reflections seamlessly without explicit manual input could significantly enhance user experience and streamline the process of using language models for various applications, such as note-taking, idea generation, or personal reflection.

The use case you mentioned with Obsidian demonstrates the potential of integrating local language models with existing productivity tools, offering users the opportunity to enhance their workflows and creativity within familiar environments.

Overall, leveraging local language models through platforms like Ollama.ai presents a compelling option for providing users with on-device AI capabilities while addressing concerns around privacy and simplifying the user experience.

Kenan7 commented 6 months ago

As far as I can see, Pile uses LlamaIndex, which does not support on-premise LLMs.


Would you consider using a different library? What is your suggestion for working around this? @UdaraJay

It would be amazing to implement this; many people are already running their LLMs locally.
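One possible workaround, sketched here purely as an assumption and not something Pile does today: newer Ollama versions expose an OpenAI-compatible endpoint, so the existing OpenAI-based code path could in principle just be pointed at localhost by overriding the client's base URL. The model name and prompt are illustrative.

```ts
// Sketch: reuse an OpenAI-style client against Ollama's OpenAI-compatible API.
// Assumes the `openai` npm package (v4+) and an Ollama server on port 11434.
import OpenAI from "openai";

const localClient = new OpenAI({
  baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint
  apiKey: "ollama", // the client requires a key, but the local server ignores it
});

async function summarizeEntry(entry: string): Promise<string> {
  const completion = await localClient.chat.completions.create({
    model: "llama2",
    messages: [{ role: "user", content: `Summarize this journal entry:\n\n${entry}` }],
  });
  return completion.choices[0].message.content ?? "";
}
```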

balamenon commented 2 months ago

@Kenan7 I can see that LlamaIndex supports Ollama now, so maybe this can be revisited. It would be great to spin up Ollama with a default model and access it on Ollama's default port (11434). Not much configuration would be required.

https://github.com/run-llama/LlamaIndexTS/blob/main/packages/core/src/llm/ollama.ts
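If that holds, the integration might look roughly like the sketch below. This assumes the `Ollama` class exported by the `llamaindex` package and a local Ollama server already running with the chosen model pulled; exact constructor options and chat method signatures may differ between LlamaIndexTS versions.

```ts
// Rough sketch only: swap the OpenAI-backed LLM for a local Ollama model
// via LlamaIndexTS. The model name and prompts are illustrative.
import { Ollama } from "llamaindex";

const llm = new Ollama({ model: "llama2" }); // talks to http://localhost:11434 by default

async function reflect(entry: string): Promise<string> {
  const response = await llm.chat({
    messages: [
      { role: "system", content: "You are a thoughtful journaling assistant." },
      { role: "user", content: `Reflect briefly on this entry:\n\n${entry}` },
    ],
  });
  // Recent versions type the content as MessageContent; for plain text
  // responses it is a string.
  return response.message.content as string;
}
```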