Closed: aleksasiriski closed this issue 1 year ago.
I have no actual idea what the defaults should be, and that's why none are set by default.
But how do I set my own limits?
Also, the above-mentioned limits were chosen by inspecting usage with `kubectl top` while watching a video.
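For reference, this is roughly how per-container usage can be inspected while a video is playing (the `piped` namespace is an assumption; adjust to your install):

```shell
# Show CPU/memory usage per container in the piped namespace
# (requires metrics-server to be installed in the cluster)
kubectl top pod -n piped --containers
```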
> But how do I set my own limits?
The same way you did previously, after upgrading to the newer chart; i.e. your resources definition is right, it just wasn't exposed by the chart before.
I'll suggest some defaults:

- Backend: max 1GB to 1.5GB (the heap size is limited to 1GB, however the RSS memory of the JVM can be higher)
- Proxy: max 300-500MB, min 32MB (it can use a lot of RAM if there are a lot of users)
- Frontend: max 128MB, min 32MB
I've edited the config above with suggested changes.
@samip5 It's not working with this config on chart 2.0.2:
```yaml
frontend:
  replicas: 2
  resources:
    requests:
      cpu: 32m
      memory: 32Mi
    limits:
      cpu: 128m
      memory: 128Mi
  env:
    BACKEND_HOSTNAME: api.piped.mydomain.com

backend:
  replicas: 2
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 1500m
      memory: 1500Mi
  config:
    PORT: 8080
    NUM_WORKERS: 2
    PROXY_PART: https://proxy.piped.mydomain.com
    API_URL: https://api.piped.mydomain.com
    FRONTEND_URL: https://piped.mydomain.com
    COMPROMISED_PASSWORD_CHECK: true
    DISABLE_REGISTRATION: true
    database:
      connection_url: jdbc:postgresql://cloudnative-pg-rw.cnpg-system.svc.cluster.local:5432/piped
      driver_class: org.postgresql.Driver
      dialect: org.hibernate.dialect.PostgreSQLDialect
      username: piped
      password: mypipeddbpassword

ytproxy:
  replicas: 2
  resources:
    requests:
      cpu: 32m
      memory: 32Mi
    limits:
      cpu: 500m
      memory: 500Mi

ingress:
  main:
    enabled: false
  backend:
    enabled: false
  ytproxy:
    enabled: false

postgresql:
  enabled: false
```
`replicas` works without issue, but `requests` and `limits` are nowhere to be found in the `kubectl describe` output:
```
Containers:
  piped-backend:
    Container ID:   containerd://cxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Image:          1337kavin/piped:latest
    Image ID:       docker.io/1337kavin/piped@sha256:60802ef9685281955015d7c301ff87b5d2163657bbbf94353bcbbf6d784226ee
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 18 Apr 2023 11:54:32 +0200
    Ready:          True
    Restart Count:  0
    Liveness:       tcp-socket :8080 delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :8080 delay=0s timeout=1s period=10s #success=1 #failure=3
    Startup:        tcp-socket :8080 delay=0s timeout=1s period=5s #success=1 #failure=30
    Environment:    <none>
    Mounts:
      /app/config.properties from config-volume (ro,path="config.properties")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5jkk (ro)
```
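One way to check whether the chart renders the resources at all, independent of the cluster, is to template it locally (the release, repo, and Deployment names below are assumptions; adjust to your setup):

```shell
# Render the chart locally and search the output for resources blocks
helm template piped piped/piped -f values.yaml | grep -B2 -A6 'resources:'

# Or query the live Deployment's container spec directly
kubectl get deployment piped-backend \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'
```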
Also, there's no need to close issues / feature requests that aren't 100% resolved.
Are requests perhaps missing in the Deployment? I see them added only to the pod template file... Maybe my assumption is wrong; I've just started learning Helm.
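For what it's worth, in a rendered Deployment the `resources` block has to sit on each entry under `spec.template.spec.containers`; it is not a valid field on the pod spec itself. A minimal sketch of the correct placement, reusing the backend values from the config above:

```yaml
# Sketch: resources belong to each container entry,
# not to the pod spec level
spec:
  template:
    spec:
      containers:
        - name: piped-backend
          resources:
            requests:
              cpu: 500m
              memory: 500Mi
            limits:
              cpu: 1500m
              memory: 1500Mi
```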
This didn't fix it; still the same config and the same output from `kubectl describe`.
I'll look into making a PR.
I was busy... Fixed it now: the block was wrongly put in the Pod definition, but resources are set per container. I also stumbled upon a bug in the ytproxy settings.
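A minimal sketch of how such a fix typically looks in a chart template, assuming a values path like `.Values.backend.resources` (the exact template layout of the piped chart may differ):

```yaml
# templates/backend/deployment.yaml (sketch; names and paths are assumptions)
spec:
  template:
    spec:
      containers:
        - name: piped-backend
          image: 1337kavin/piped:latest
          resources:
            {{- toYaml .Values.backend.resources | nindent 12 }}
```

The `toYaml ... | nindent 12` idiom passes whatever the user puts under `resources` in their values file straight through to the container spec, so both `requests` and `limits` render at the right indentation.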
**Helm chart name**

piped

**Describe the solution you'd like**

Add an option to set resource requests and limits. In addition to that, set default requests and limits:

**Additional Information**

No response