UPDATE: up to now I'd been testing with a real Kubernetes cluster, but I've since tried the community chart on a local colima instance, and I've actually been able to get Console's health check to pass by manually starting the server:
```
kubectl exec jitsu-console-6b977f89d7-462q2 -it -- bash
Defaulted container "jitsu" out of: jitsu, wait-for-tokens (init), wait-for-migration (init)
root@jitsu-console-6b977f89d7-462q2:/app# cd webapps/console/
root@jitsu-console-6b977f89d7-462q2:/app/webapps/console# HOSTNAME=0.0.0.0 node server.js & echo $! > /app/console.pid
[1] 15
root@jitsu-console-6b977f89d7-462q2:/app/webapps/console# ▲ Next.js 13.5.6
- Local: http://localhost:3000
- Network: http://0.0.0.0:3000
✓ Ready in 8.4s
(node:15) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
2024-07-11 11:15:09.738Z INFO [db]: Initializing prisma
2024-07-11 11:15:09.901Z INFO [singleton]: ️⚡️⚡️⚡️ prisma connected in 165ms!
2024-07-11 11:15:09.940Z INFO [singleton]: ️⚡️⚡️⚡️ pg connected in 28ms!
prisma:info Starting a postgresql pool with 5 connections.
2024-07-11 11:15:14.273Z INFO [db]: Connecting new client postgres://jitsu:jitsu@jitsu-postgresql:5432/jitsu?schema=newjitsu. Pool stat: idle=0, waiting=0, total=1. Default schema: newjitsu
```
Then running the health check from another exec:
```
kubectl exec jitsu-console-6b977f89d7-462q2 -it -- curl -s "http://localhost:3000/api/healthcheck" -w '%{http_code}'
Defaulted container "jitsu" out of: jitsu, wait-for-tokens (init), wait-for-migration (init)
{"status":"ok","prisma":{"status":"ok","ms":3608},"postgres":{"status":"ok","ms":274}}200%
kubectl exec jitsu-console-6b977f89d7-462q2 -it -- curl -s "http://jitsu-console-6b977f89d7-462q2:3000/api/healthcheck" -w '%{http_code}'
Defaulted container "jitsu" out of: jitsu, wait-for-tokens (init), wait-for-migration (init)
{"status":"ok","prisma":{"status":"ok","ms":122},"postgres":{"status":"ok","ms":69}}200%
```
Maybe the issue is exclusive to our real Kubernetes infra 🤔? I'll investigate further.
UPDATE: I've figured out the root cause; all the details are here: https://github.com/stafftastic/jitsu-chart/issues/37#issuecomment-2223116724

But TL;DR: `curl` with `hostname -f` works on IPv4 Kubernetes clusters, but the script will always fail on IPv6 clusters (which is what we're using), because `curl` requires a specific option to handle IPv6 addresses.

So now I truly believe we should switch to `localhost`: it's easier than adjusting the script to call `curl` differently depending on the IP version.
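To illustrate the failure mode with a made-up address (the exact command the script ends up building is in the linked comment):

```
# fd00::1234 is a placeholder IPv6 address. Unbracketed IPv6 literals are
# rejected by curl because the colons clash with the port separator:
curl -s "http://fd00::1234:3000/api/healthcheck"        # error
# IPv6 literals must be wrapped in brackets; -g (--globoff) is commonly
# added so curl's URL globbing leaves the brackets alone:
curl -sg "http://[fd00::1234]:3000/api/healthcheck"     # ok
# localhost works on either IP family, with no special-casing:
curl -s "http://localhost:3000/api/healthcheck"         # ok
```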
## Summary
Hello,

Would it be possible to use `localhost` instead of `hostname` in Console's entrypoint script, please? Specifically this part here: https://github.com/jitsucom/jitsu/blob/58fac376480df18fe7dccadff8714abc1bc4f737/docker-start-console.sh#L54

We're trying to self-host Jitsu, but Console's entrypoint script fails the health check because our Pod's `/etc/hosts` doesn't include the Pod's name (which `hostname -f` returns), so the `curl` command returns a `000`. We modified the script on a live Pod to use `localhost` to test, and only then does the script proceed fine.

Was there a reason why you changed to `hostname` a few PRs ago? https://github.com/jitsucom/jitsu/pull/1077/commits/06a8db2dc4e6017f21532a83ea25cb5045eedbfa#diff-ff5878f38438ed7c50e9ba556a4312ea11c93922c8eeb25f81dd102754d6221aL45

We believe that `localhost` would be a more standard solution for both Kubernetes and Docker Compose deployments. We've tried deploying via the community Helm chart and via our own custom charts, but we ran into the same issue regardless: https://github.com/stafftastic/jitsu-chart
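For clarity, the change we're proposing is roughly the following. This is a sketch of the shape of the check, not the verbatim contents of docker-start-console.sh:

```
# Current approach (sketch): depends on the Pod's own name resolving.
# status=$(curl -s -o /dev/null -w '%{http_code}' "http://$(hostname -f):3000/api/healthcheck")

# Proposed: probe the server over loopback instead.
status=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:3000/api/healthcheck")
[ "$status" = "200" ] && echo "Console is up"
```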
Thanks in advance for your consideration 🙏
## System configuration and versions

## Artifacts (logs, etc)