iyaja / llama-fs

A self-organizing file system with llama 3
MIT License

This is NOT running ollama, privacy issue #24

Open · Bardo-Konrad opened this issue 1 month ago

Bardo-Konrad commented 1 month ago

When running in incognito mode, why do I get a groq.RateLimitError?

groq.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for model llama3-70b-8192 in organization ... on tokens per minute (TPM): Limit 6000, Used 0, Requested ~24996. Please try again in 3m9.96s. Visit https://console.groq.com/docs/rate-limits for more information.', 'type': 'tokens', 'code': 'rate_limit_exceeded'}}

barakplasma commented 2 weeks ago

I was also a bit misled by the README, which states, "For local processing, we integrated Ollama running the same model to ensure privacy in incognito mode": https://github.com/iyaja/llama-fs/blob/1b4608545e6b3a0097b50241956b90e9e648c94f/README.md?plain=1#L29

There is currently only an implementation for handling text files via Groq; there is no implementation that uses Ollama: https://github.com/iyaja/llama-fs/blob/1b4608545e6b3a0097b50241956b90e9e648c94f/src/loader.py#L189-L195

There is also a hardcoded Groq API key, and it no longer works.
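For what it's worth, the usual fix for the hardcoded key is a one-line change, sketched here under the assumption that the loader keeps using the groq Python client:

```python
import os

from groq import Groq

# Read the key from the environment instead of committing it to the repo.
# Groq() also picks up GROQ_API_KEY on its own if no api_key is passed.
client = Groq(api_key=os.environ["GROQ_API_KEY"])
```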

Bardo-Konrad commented 2 weeks ago

I assume malicious intent

areibman commented 2 weeks ago

> I assume malicious intent

Not malicious, just lazy lol. This was a hackathon project, and we swapped out Ollama for Groq because it was much faster. It works fine with Ollama, though.

We don't really have the time to fix this ourselves, but if anyone raises a PR we'll gladly merge!
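For anyone who wants to pick this up, here is a rough sketch of what the incognito branch could look like using the ollama Python client against a locally running Ollama server. The summarize function, the incognito flag, and the prompt are illustrative only, not the actual loader.py interface; the model names are the ones mentioned in this thread.

```python
import ollama          # talks to a local Ollama server; nothing leaves the machine
from groq import Groq  # hosted Groq API


def summarize(text: str, incognito: bool) -> str:
    """Route to the local model in incognito mode, otherwise use Groq.

    Illustrative sketch only; names do not match the real llama-fs code.
    """
    prompt = f"Summarize this file:\n\n{text}"
    if incognito:
        # Requires a running Ollama daemon and `ollama pull llama3`.
        response = ollama.chat(
            model="llama3",
            messages=[{"role": "user", "content": prompt}],
        )
        return response["message"]["content"]
    client = Groq()  # reads GROQ_API_KEY from the environment
    response = client.chat.completions.create(
        model="llama3-70b-8192",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```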

barakplasma commented 2 weeks ago

@areibman and @Bardo-Konrad please take a look at #44, a PR that is supposed to fix this issue. But please test it and review it carefully; I didn't test it enough