Closed — kbarto323 closed this issue 5 months ago
I removed the setup step from the Docker entrypoint and added a new variable, RUN_SETUP, so the setup can be run again if needed. It defaults to false. Wait for the GitHub Action to finish and then try it out.
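For anyone who wants to trigger the setup again, a minimal sketch of how the new variable would be passed (the image name and tag below are assumptions based on the log output later in this thread, not confirmed by the maintainer):

```sh
# Re-run the setup once by overriding RUN_SETUP (defaults to false).
# Image name/tag are assumptions -- adjust to the image you actually use.
docker run -d \
  -e RUN_SETUP=true \
  --name privategpt \
  3x3cut0r/privategpt:latest
```

On subsequent starts, leaving RUN_SETUP unset (or set to false) should skip the setup step entirely.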
Perfect, thank you!
Does running locally with Ollama require a separate tokenizer? Each time the container starts it downloads the model again:
privategpt | Downloading LLM mistral-7b-instruct-v0.2.Q4_K_M.gguf