allenporter / k8s-gitops

Flux/Gitops managed k8s cluster

Update ghcr.io/allenporter/llama-cpp-server-model-fetch Docker tag to v2.18.0 #1879

Closed by renovate[bot] 3 months ago

renovate[bot] commented 3 months ago

Mend Renovate

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| ghcr.io/allenporter/llama-cpp-server-model-fetch | minor | `v2.17.0` -> `v2.18.0` |

Release Notes

allenporter/llama-cpp-server (ghcr.io/allenporter/llama-cpp-server-model-fetch)

### [`v2.18.0`](https://togithub.com/allenporter/llama-cpp-server/releases/tag/v2.18.0)

[Compare Source](https://togithub.com/allenporter/llama-cpp-server/compare/v2.17.0...v2.18.0)

#### What's Changed

- Update dependency uvicorn to v0.30.0 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/73](https://togithub.com/allenporter/llama-cpp-server/pull/73)
- Update dependency transformers to v4.41.2 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/74](https://togithub.com/allenporter/llama-cpp-server/pull/74)
- Update dependency uvicorn to v0.30.1 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/75](https://togithub.com/allenporter/llama-cpp-server/pull/75)
- Update dependency pydantic-settings to v2.3.0 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/76](https://togithub.com/allenporter/llama-cpp-server/pull/76)
- Update dependency llama_cpp_python to v0.2.77 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/77](https://togithub.com/allenporter/llama-cpp-server/pull/77)
- Update dependency pydantic-settings to v2.3.1 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/78](https://togithub.com/allenporter/llama-cpp-server/pull/78)
- Update dependency cmake to v3.29.5 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/79](https://togithub.com/allenporter/llama-cpp-server/pull/79)
- Update dependency llama_cpp_python to v0.2.78 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/80](https://togithub.com/allenporter/llama-cpp-server/pull/80)

**Full Changelog**: https://github.com/allenporter/llama-cpp-server/compare/v2.17.0...v2.18.0

Configuration

📅 Schedule: Branch creation - "every weekend" in timezone America/Los_Angeles, Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.
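The schedule and automerge behavior described above would correspond to a Renovate configuration roughly like the following. This is a hedged sketch only; the repository's actual `renovate.json` is not shown in this PR:

```json
{
  "extends": ["config:recommended"],
  "timezone": "America/Los_Angeles",
  "schedule": ["every weekend"],
  "automerge": true
}
```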



This PR has been generated by Mend Renovate. View repository job log here.

github-actions[bot] commented 3 months ago
--- kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-cublas

+++ kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-cublas

@@ -48,13 +48,13 @@

               value: /data/models
             - name: MODEL_URLS
               value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf,https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF/resolve/main/Mistral-7B-Instruct-v0.3.Q3_K_M.gguf,https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf,https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
             image:
               pullPolicy: IfNotPresent
               repository: ghcr.io/allenporter/llama-cpp-server-model-fetch
-              tag: v2.17.0
+              tag: v2.18.0
         strategy: Recreate
     defaultPodOptions:
       runtimeClassName: nvidia
     ingress:
       main:
         annotations:
--- kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-openblas

+++ kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-openblas

@@ -45,13 +45,13 @@

               value: /data/models
             - name: MODEL_URLS
               value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
             image:
               pullPolicy: IfNotPresent
               repository: ghcr.io/allenporter/llama-cpp-server-model-fetch
-              tag: v2.17.0
+              tag: v2.18.0
         strategy: Recreate
     ingress:
       main:
         annotations:
           cert-manager.io/cluster-issuer: letsencrypt
         enabled: true
github-actions[bot] commented 3 months ago
--- HelmRelease: llama/llama-openblas Deployment: llama/llama-openblas

+++ HelmRelease: llama/llama-openblas Deployment: llama/llama-openblas

@@ -35,13 +35,13 @@

       initContainers:
       - env:
         - name: MODEL_DIR
           value: /data/models
         - name: MODEL_URLS
           value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
-        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.17.0
+        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.18.0
         imagePullPolicy: IfNotPresent
         name: download-model
         volumeMounts:
         - mountPath: /config/model-config.json
           name: config
           readOnly: true
--- HelmRelease: llama/llama-cublas Deployment: llama/llama-cublas

+++ HelmRelease: llama/llama-cublas Deployment: llama/llama-cublas

@@ -36,13 +36,13 @@

       initContainers:
       - env:
         - name: MODEL_DIR
           value: /data/models
         - name: MODEL_URLS
           value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf,https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF/resolve/main/Mistral-7B-Instruct-v0.3.Q3_K_M.gguf,https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf,https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
-        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.17.0
+        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.18.0
         imagePullPolicy: IfNotPresent
         name: download-model
         volumeMounts:
         - mountPath: /config/model-config.json
           name: config
           readOnly: true
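The diffs above only bump the `llama-cpp-server-model-fetch` image tag; the init container's actual implementation is not part of this PR. As a rough sketch of how such an init container might consume the `MODEL_DIR` and `MODEL_URLS` environment variables shown in the deployment specs (function names here are hypothetical, not the real image's code):

```python
import os
import urllib.parse
import urllib.request


def parse_model_urls(env_value: str) -> list[tuple[str, str]]:
    """Split a comma-separated MODEL_URLS value into (url, filename) pairs."""
    pairs = []
    for url in filter(None, (u.strip() for u in env_value.split(","))):
        # Use the last path segment of the URL as the on-disk filename.
        filename = urllib.parse.urlparse(url).path.rsplit("/", 1)[-1]
        pairs.append((url, filename))
    return pairs


def fetch_models(model_dir: str, env_value: str) -> None:
    """Download each model into MODEL_DIR, skipping files already present
    (which makes re-runs of the init container idempotent)."""
    os.makedirs(model_dir, exist_ok=True)
    for url, filename in parse_model_urls(env_value):
        dest = os.path.join(model_dir, filename)
        if not os.path.exists(dest):
            urllib.request.urlretrieve(url, dest)


if __name__ == "__main__":
    fetch_models(
        os.environ.get("MODEL_DIR", "/data/models"),
        os.environ.get("MODEL_URLS", ""),
    )
```

Skipping already-downloaded files matters here because the pods use `strategy: Recreate` and the init container runs on every restart; multi-gigabyte GGUF files should only be fetched once onto the persistent volume.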