allenporter / k8s-gitops

Flux/Gitops managed k8s cluster

Update ghcr.io/allenporter/llama-cpp-server-model-fetch Docker tag to v2.17.0 #1866

Closed · renovate[bot] closed this 1 month ago

renovate[bot] commented 1 month ago

Mend Renovate

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| ghcr.io/allenporter/llama-cpp-server-model-fetch | minor | v2.16.0 -> v2.17.0 |

Release Notes

allenporter/llama-cpp-server (ghcr.io/allenporter/llama-cpp-server-model-fetch)

### [`v2.17.0`](https://togithub.com/allenporter/llama-cpp-server/releases/tag/v2.17.0)

[Compare Source](https://togithub.com/allenporter/llama-cpp-server/compare/v2.16.0...v2.17.0)

### Changes

- Adds transformers dependency needed by functionary

**Full Changelog**: https://github.com/allenporter/llama-cpp-server/compare/v2.16.0...v2.17.0
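
The change itself is a one-line tag bump in the HelmRelease values for the download-model init container. Below is a minimal sketch of that values fragment; only the `image` block and the `MODEL_DIR` entry are taken from the rendered diffs further down, and the surrounding key nesting is an assumption:

```yaml
# Sketch only: assumed HelmRelease values layout around the fragment this PR bumps.
spec:
  values:
    initContainers:
      download-model:
        env:
          - name: MODEL_DIR
            value: /data/models
          # MODEL_URLS list omitted for brevity
        image:
          repository: ghcr.io/allenporter/llama-cpp-server-model-fetch
          tag: v2.17.0            # bumped from v2.16.0 by this PR
          pullPolicy: IfNotPresent
```

Only the `tag` value changes, as the rendered diffs from github-actions[bot] below confirm.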

Configuration

📅 Schedule: Branch creation - "every weekend" in timezone America/Los_Angeles, Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.
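
These settings correspond to standard Renovate repository-config options. A minimal sketch of how they might be expressed, written as JSON5 so the assumptions can be commented (the file name and any extends presets are not shown in this PR and are assumptions; the option names themselves are standard Renovate config):

```json5
// Sketch only: settings that would produce the schedule/automerge/rebase
// behaviour described above; surrounding config in this repo is assumed.
{
  timezone: "America/Los_Angeles",
  schedule: ["every weekend"],   // limits branch creation to weekends
  automerge: true,               // automerge at any time (no automergeSchedule set)
  rebaseWhen: "conflicted"       // rebase whenever the PR becomes conflicted
}
```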



This PR has been generated by Mend Renovate. View repository job log here.

github-actions[bot] commented 1 month ago

--- kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-openblas
+++ kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-openblas
@@ -45,13 +45,13 @@
               value: /data/models
             - name: MODEL_URLS
               value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
             image:
               pullPolicy: IfNotPresent
               repository: ghcr.io/allenporter/llama-cpp-server-model-fetch
-              tag: v2.16.0
+              tag: v2.17.0
         strategy: Recreate
     ingress:
       main:
         annotations:
           cert-manager.io/cluster-issuer: letsencrypt
         enabled: true

github-actions[bot] commented 1 month ago

--- HelmRelease: llama/llama-openblas Deployment: llama/llama-openblas
+++ HelmRelease: llama/llama-openblas Deployment: llama/llama-openblas
@@ -35,13 +35,13 @@
       initContainers:
       - env:
         - name: MODEL_DIR
           value: /data/models
         - name: MODEL_URLS
           value: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf,https://huggingface.co/chanwit/flux-7b-v0.1-gguf/resolve/main/flux-7b-v0.1-Q4_K_M.gguf,https://huggingface.co/meetkai/functionary-7b-v1.4-GGUF/resolve/main/functionary-7b-v1.4.q4_0.gguf,https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf
-        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.16.0
+        image: ghcr.io/allenporter/llama-cpp-server-model-fetch:v2.17.0
         imagePullPolicy: IfNotPresent
         name: download-model
         volumeMounts:
         - mountPath: /config/model-config.json
           name: config
           readOnly: true