Closed: renovate[bot] closed this PR 3 months ago
```diff
--- kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-cublas
+++ kubernetes/ml/prod Kustomization: flux-system/ml HelmRelease: llama/llama-cublas
@@ -34,13 +34,13 @@
           value: /data/models
         - name: CONFIG_FILE
           value: /config/model-config.json
       image:
         pullPolicy: IfNotPresent
         repository: ghcr.io/allenporter/llama-cpp-server-cuda
-        tag: v2.17.0
+        tag: v2.18.0
       resources:
         limits:
           nvidia.com/gpu: 1
       initContainers:
         download-model:
           env:
```
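For context, the changed `tag` field sits under the chart's `image` values in the HelmRelease. A minimal sketch of the relevant fragment, assuming the surrounding layout implied by the diff (the `apiVersion`, `chart`, and `metadata` fields are assumptions, not taken from this PR):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: llama-cublas
  namespace: llama
spec:
  values:
    image:
      repository: ghcr.io/allenporter/llama-cpp-server-cuda
      pullPolicy: IfNotPresent
      tag: v2.18.0   # the only field this PR changes (was v2.17.0)
    resources:
      limits:
        nvidia.com/gpu: 1   # schedule onto a GPU node
```

Because Flux templates the Deployment from these values, bumping this single `tag` line produces the corresponding image change in the rendered Deployment shown below.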
```diff
--- HelmRelease: llama/llama-cublas Deployment: llama/llama-cublas
+++ HelmRelease: llama/llama-cublas Deployment: llama/llama-cublas
@@ -52,13 +52,13 @@
       containers:
       - env:
         - name: MODEL_DIR
           value: /data/models
         - name: CONFIG_FILE
           value: /config/model-config.json
-        image: ghcr.io/allenporter/llama-cpp-server-cuda:v2.17.0
+        image: ghcr.io/allenporter/llama-cpp-server-cuda:v2.18.0
         imagePullPolicy: IfNotPresent
         livenessProbe:
           failureThreshold: 3
           initialDelaySeconds: 0
           periodSeconds: 10
           tcpSocket:
```
This PR contains the following updates:

| Package | Change |
|---|---|
| ghcr.io/allenporter/llama-cpp-server-cuda | `v2.17.0` -> `v2.18.0` |
Release Notes

allenporter/llama-cpp-server (ghcr.io/allenporter/llama-cpp-server-cuda)

### [`v2.18.0`](https://togithub.com/allenporter/llama-cpp-server/releases/tag/v2.18.0)

[Compare Source](https://togithub.com/allenporter/llama-cpp-server/compare/v2.17.0...v2.18.0)

#### What's Changed

- Update dependency uvicorn to v0.30.0 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/73](https://togithub.com/allenporter/llama-cpp-server/pull/73)
- Update dependency transformers to v4.41.2 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/74](https://togithub.com/allenporter/llama-cpp-server/pull/74)
- Update dependency uvicorn to v0.30.1 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/75](https://togithub.com/allenporter/llama-cpp-server/pull/75)
- Update dependency pydantic-settings to v2.3.0 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/76](https://togithub.com/allenporter/llama-cpp-server/pull/76)
- Update dependency llama_cpp_python to v0.2.77 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/77](https://togithub.com/allenporter/llama-cpp-server/pull/77)
- Update dependency pydantic-settings to v2.3.1 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/78](https://togithub.com/allenporter/llama-cpp-server/pull/78)
- Update dependency cmake to v3.29.5 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/79](https://togithub.com/allenporter/llama-cpp-server/pull/79)
- Update dependency llama_cpp_python to v0.2.78 by [@renovate](https://togithub.com/renovate) in [https://github.com/allenporter/llama-cpp-server/pull/80](https://togithub.com/allenporter/llama-cpp-server/pull/80)

**Full Changelog**: https://github.com/allenporter/llama-cpp-server/compare/v2.17.0...v2.18.0

### Configuration
📅 Schedule: Branch creation - "every weekend" in timezone America/Los_Angeles, Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever the PR becomes conflicted, or if you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
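The schedule and automerge behavior described above come from the repository's Renovate configuration. A minimal `renovate.json` sketch that would produce this behavior, shown only as an illustration (the repo's actual config file and its use of `config:recommended` are assumptions):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "timezone": "America/Los_Angeles",
  "schedule": ["every weekend"],
  "automerge": true
}
```

With `automerge: true` and no `automergeSchedule`, Renovate creates branches only on weekends (Pacific time) but merges passing PRs at any time, matching the footer above.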
This PR has been generated by Mend Renovate. View repository job log here.