allenporter / k8s-gitops

Flux/Gitops managed k8s cluster

Update ghcr.io/allenporter/llama-cpp-server-model-fetch Docker tag to v2.15.0 #1850

Closed · renovate[bot] closed this 4 months ago

renovate[bot] commented 4 months ago

Mend Renovate

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| ghcr.io/allenporter/llama-cpp-server-model-fetch | minor | `v2.12.0` -> `v2.15.0` |
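For context, the tag being bumped lives in the HelmRelease values of each of the three llama variants in this cluster. A representative excerpt, reassembled from the diffs further down in this PR (surrounding chart values omitted):

```yaml
# Excerpt of the HelmRelease values that Renovate rewrites (one copy per
# llama variant; see the flux-local diffs below). Only the image block is shown.
image:
  pullPolicy: IfNotPresent
  repository: ghcr.io/allenporter/llama-cpp-server-model-fetch
  tag: v2.15.0  # v2.12.0 before this PR
```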

Release Notes

allenporter/llama-cpp-server (ghcr.io/allenporter/llama-cpp-server-model-fetch)

### [`v2.15.0`](https://togithub.com/allenporter/llama-cpp-server/releases/tag/v2.15.0)

[Compare Source](https://togithub.com/allenporter/llama-cpp-server/compare/v2.14.1...v2.15.0)

#### What's Changed

- Update dependency llama_cpp_python to v0.2.75 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/71](https://togithub.com/allenporter/llama-cpp-server/pull/71)

**Full Changelog**: https://github.com/allenporter/llama-cpp-server/compare/v2.14.1...v2.15.0

### [`v2.14.1`](https://togithub.com/allenporter/llama-cpp-server/releases/tag/v2.14.1)

[Compare Source](https://togithub.com/allenporter/llama-cpp-server/compare/v2.13.0...v2.14.1)

**Full Changelog**: https://github.com/allenporter/llama-cpp-server/compare/v2.14.0...v2.14.1

### [`v2.13.0`](https://togithub.com/allenporter/llama-cpp-server/releases/tag/v2.13.0)

[Compare Source](https://togithub.com/allenporter/llama-cpp-server/compare/v2.12.0...v2.13.0)

#### What's Changed

- Update dependency llama_cpp_python to v0.2.63 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/61](https://togithub.com/allenporter/llama-cpp-server/pull/61)
- Update dependency fastapi to v0.110.2 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/60](https://togithub.com/allenporter/llama-cpp-server/pull/60)
- Update dependency llama_cpp_python to v0.2.64 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/62](https://togithub.com/allenporter/llama-cpp-server/pull/62)
- Update dependency llama_cpp_python to v0.2.65 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/63](https://togithub.com/allenporter/llama-cpp-server/pull/63)
- Update dependency fastapi to v0.110.3 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/64](https://togithub.com/allenporter/llama-cpp-server/pull/64)
- Update dependency llama_cpp_python to v0.2.68 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/65](https://togithub.com/allenporter/llama-cpp-server/pull/65)
- Update dependency llama_cpp_python to v0.2.69 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/66](https://togithub.com/allenporter/llama-cpp-server/pull/66)
- Update dependency fastapi to v0.111.0 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/67](https://togithub.com/allenporter/llama-cpp-server/pull/67)
- Update dependency llama_cpp_python to v0.2.73 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/68](https://togithub.com/allenporter/llama-cpp-server/pull/68)
- Update dependency cmake to v3.29.3 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/69](https://togithub.com/allenporter/llama-cpp-server/pull/69)
- Update dependency llama_cpp_python to v0.2.74 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/70](https://togithub.com/allenporter/llama-cpp-server/pull/70)

**Full Changelog**: https://github.com/allenporter/llama-cpp-server/compare/v2.12.0...v2.13.0

Configuration

📅 Schedule: Branch creation - "every weekend" in timezone America/Los_Angeles, Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.
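
A minimal sketch of the repository Renovate config that would produce the behavior summarized above. The file name and rule shape are assumptions for illustration, not a copy of this repo's actual config; the option names (`timezone`, `schedule`, `packageRules`, `matchPackageNames`, `automerge`) are real Renovate options:

```json5
// Hypothetical renovate.json5 excerpt; values mirror the Configuration
// summary above, but this is an illustrative sketch, not the repo's config.
{
  timezone: "America/Los_Angeles",
  schedule: ["every weekend"], // when Renovate may create branches
  packageRules: [
    {
      // automerge container image bumps like this one
      matchPackageNames: ["ghcr.io/allenporter/llama-cpp-server-model-fetch"],
      automerge: true,
    },
  ],
}
```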



This PR has been generated by Mend Renovate. View repository job log here.

github-actions[bot] commented 4 months ago
--- kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-clblast
+++ kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-clblast
@@ -45,13 +45,13 @@
               value: /data/models
             - name: MODEL_URLS
               value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
             image:
               pullPolicy: IfNotPresent
               repository: ghcr.io/allenporter/llama-cpp-server-model-fetch
-              tag: v2.12.0
+              tag: v2.15.0
         strategy: Recreate
     ingress:
       main:
         annotations:
           cert-manager.io/cluster-issuer: letsencrypt
         enabled: true
--- kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-cublas
+++ kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-cublas
@@ -48,13 +48,13 @@
               value: /data/models
             - name: MODEL_URLS
               value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
             image:
               pullPolicy: IfNotPresent
               repository: ghcr.io/allenporter/llama-cpp-server-model-fetch
-              tag: v2.12.0
+              tag: v2.15.0
         strategy: Recreate
     defaultPodOptions:
       runtimeClassName: nvidia
     ingress:
       main:
         annotations:
--- kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-openblas
+++ kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-openblas
@@ -45,13 +45,13 @@
               value: /data/models
             - name: MODEL_URLS
               value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
             image:
               pullPolicy: IfNotPresent
               repository: ghcr.io/allenporter/llama-cpp-server-model-fetch
-              tag: v2.12.0
+              tag: v2.15.0
         strategy: Recreate
     ingress:
       main:
         annotations:
           cert-manager.io/cluster-issuer: letsencrypt
         enabled: true
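
The diff above (and the rendered-Deployment diff in the next comment) is produced by a CI job that builds the Flux objects from the PR branch and compares them against the default branch. A minimal sketch of such a job using flux-local (https://github.com/allenporter/flux-local); the workflow wiring and exact flags are assumptions, so verify against `flux-local diff --help` before relying on them:

```yaml
# Hypothetical GitHub Actions job that reproduces the diff comments on this PR.
# Assumes flux-local's `diff hr` subcommand and --path flag; the step that
# posts the output back as a PR comment is omitted.
name: flux-diff
on: pull_request
jobs:
  diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install flux-local
      # Diff the HelmReleases rendered from this branch against the default branch
      - run: flux-local diff hr --path kubernetes/ml/prod
```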
github-actions[bot] commented 4 months ago
--- HelmRelease: llama/llama-openblas Deployment: llama/llama-openblas
+++ HelmRelease: llama/llama-openblas Deployment: llama/llama-openblas
@@ -35,13 +35,13 @@
       initContainers:
       - env:
         - name: MODEL_DIR
           value: /data/models
         - name: MODEL_URLS
           value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
-        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.12.0
+        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.15.0
         imagePullPolicy: IfNotPresent
         name: download-model
         volumeMounts:
         - mountPath: /config/model-config.json
           name: config
           readOnly: true
--- HelmRelease: llama/llama-cublas Deployment: llama/llama-cublas
+++ HelmRelease: llama/llama-cublas Deployment: llama/llama-cublas
@@ -36,13 +36,13 @@
       initContainers:
       - env:
         - name: MODEL_DIR
           value: /data/models
         - name: MODEL_URLS
           value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
-        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.12.0
+        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.15.0
         imagePullPolicy: IfNotPresent
         name: download-model
         volumeMounts:
         - mountPath: /config/model-config.json
           name: config
           readOnly: true
--- HelmRelease: llama/llama-clblast Deployment: llama/llama-clblast
+++ HelmRelease: llama/llama-clblast Deployment: llama/llama-clblast
@@ -35,13 +35,13 @@
       initContainers:
       - env:
         - name: MODEL_DIR
           value: /data/models
         - name: MODEL_URLS
           value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
-        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.12.0
+        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.15.0
         imagePullPolicy: IfNotPresent
         name: download-model
         volumeMounts:
         - mountPath: /config/model-config.json
           name: config
           readOnly: true
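
Reassembled from the diffs above, the init container that each llama Deployment runs after this merge looks like the following (MODEL_URLS abbreviated to its first entry; the full comma-separated list appears verbatim in the diffs):

```yaml
# Rendered init container after the bump (reassembled from the diffs above;
# surrounding Deployment fields omitted, MODEL_URLS truncated for brevity).
initContainers:
  - name: download-model
    image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.15.0
    imagePullPolicy: IfNotPresent
    env:
      - name: MODEL_DIR
        value: /data/models
      - name: MODEL_URLS
        # full list of four GGUF model URLs as shown in the diffs above
        value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf
    volumeMounts:
      - mountPath: /config/model-config.json
        name: config
        readOnly: true
```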