nats-io / nats-box

A container with NATS utilities
Apache License 2.0

NATS-oriented images don't support arm64 even though they claim they do #21

Closed kamilgregorczyk closed 1 year ago

kamilgregorczyk commented 3 years ago

Hello, I've been using nats-streaming on an arm64-based cluster and recently moved to JetStream. I found that:

✅ NATS supports arm64 without any problems

❌ The other images (config-reloader, nats-box, etc.) all claim an arm64 manifest on the latest tag, but they are not actually arm64-compatible: I get an exec format error on an arm64 machine, while NATS itself works perfectly on exactly the same server.

kozlovic commented 3 years ago

It's not clear what is not working: the NATS Server, or the tools (config-reloader, nats-box, etc.)? Because this repo produces the Docker image of the NATS Server.

kamilgregorczyk commented 3 years ago

config-reloader, nats-box, etc. don't work even though they have arm64 manifests.

kozlovic commented 3 years ago

Not sure where you saw that they have manifests saying they support arm64?

$ docker run --rm -ti mplatform/mquery synadia/nats-box:latest
Image: synadia/nats-box:latest (digest: sha256:caf0c9fe15a9a88d001c74fd9d80f7f6fd57474aa243cd63a9a086eda9e202be)
 * Manifest List: No (Image type: application/vnd.docker.distribution.manifest.v2+json)
 * Supports: linux/amd64

as opposed to the NATS Server image, for instance:

$ docker run --rm -ti mplatform/mquery nats:latest
Image: nats:latest (digest: sha256:e976c394120e489ed76b54ef3a4cc2ff9bbe34161aaa623070216204131ce123)
 * Manifest List: Yes (Image type: application/vnd.docker.distribution.manifest.list.v2+json)
 * Supported platforms:
   - linux/amd64
   - linux/arm/v6
   - linux/arm/v7
   - linux/arm64/v8
   - windows/amd64:10.0.17763.1999
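The difference mquery is reporting is visible directly in the registry's manifest JSON: a multi-arch image is published as a manifest *list* whose entries each name a platform, while a single-arch image has a plain manifest with no platform entries. A minimal sketch of that check in Python, using a hand-written sample (the JSON below is illustrative, not fetched from Docker Hub):

```python
import json

# Media types that indicate a multi-arch manifest list / OCI index
MANIFEST_LIST_TYPES = {
    "application/vnd.docker.distribution.manifest.list.v2+json",
    "application/vnd.oci.image.index.v1+json",
}

def supported_platforms(manifest_json: str) -> list[str]:
    """Return 'os/arch' strings if this is a manifest list, else []."""
    doc = json.loads(manifest_json)
    if doc.get("mediaType") not in MANIFEST_LIST_TYPES:
        return []  # single-platform image: no per-platform entries
    return [
        f'{m["platform"]["os"]}/{m["platform"]["architecture"]}'
        for m in doc.get("manifests", [])
    ]

# Illustrative sample resembling what `docker manifest inspect` prints:
sample = json.dumps({
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {"platform": {"os": "linux", "architecture": "amd64"}},
        {"platform": {"os": "linux", "architecture": "arm64"}},
    ],
})
print(supported_platforms(sample))  # ['linux/amd64', 'linux/arm64']
```

Note that a registry can also serve a single-platform manifest under a tag, which is exactly the synadia/nats-box case above: the pull succeeds on arm64, but the binary inside is amd64.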
kamilgregorczyk commented 3 years ago

natsio is used in the Helm chart, not synadia?

kamilgregorczyk commented 3 years ago

example: https://github.com/nats-io/k8s/blob/master/helm/charts/nats/values.yaml#L255

https://hub.docker.com/r/natsio/nats-boot-config/tags?page=1&ordering=last_updated

kozlovic commented 3 years ago

@wallyqs @variadico Not sure who creates the Docker images for the aforementioned tools in the natsio Docker repo, but the one hosted under Synadia correctly shows support only for linux/amd64, while the one under natsio shows:

$ docker run --rm -ti mplatform/mquery natsio/nats-box:latest
Image: natsio/nats-box:latest (digest: sha256:51f09970f8fd979bdfc8ff9b38205030384e4592de05cf52c065f9c0ff8bc5de)
 * Manifest List: Yes (Image type: application/vnd.docker.distribution.manifest.list.v2+json)
 * Supported platforms:
   - linux/amd64
   - linux/arm64

Same for boot-config, etc.

wallyqs commented 3 years ago

Thanks, we need to look into adding support.

The Dockerfiles for the reloader and boot config are here: https://github.com/nats-io/nack/tree/main/docker

And for the exporter: https://github.com/nats-io/prometheus-nats-exporter/blob/master/docker/linux/amd64/Dockerfile
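For reference, the usual way to publish an image with a proper multi-arch manifest list is a single buildx build that targets several platforms at once, typically with QEMU emulation for the non-native ones. A sketch of such a CI job as a GitHub Actions fragment; the job name, tag, and action versions here are illustrative assumptions, not the project's actual release setup:

```yaml
# Sketch of a CI job publishing a multi-arch manifest list (illustrative).
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3    # QEMU so arm64 can be built on amd64
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/amd64,linux/arm64  # both entries land in one manifest list
          push: true
          tags: example/nats-box:dev          # placeholder tag
```

The key point is that the manifest list only helps if every platform entry actually points at a binary compiled for that platform; a list that maps linux/arm64 to an amd64 binary reproduces exactly the failure reported here.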

kamilgregorczyk commented 3 years ago

So from your point of view, which repo is more reliable: synadia or natsio?

kamilgregorczyk commented 3 years ago

Ah yes, nats-box works now, but the reloader (from natsio) still crashes:

➜  ~ kubectl describe pod nats-2
Name:         nats-2
Namespace:    default
Priority:     0
Node:         worker3/192.168.0.204
Start Time:   Tue, 06 Jul 2021 22:45:14 +0200
Labels:       app.kubernetes.io/instance=nats
              app.kubernetes.io/name=nats
              controller-revision-hash=nats-6c4b877ddd
              statefulset.kubernetes.io/pod-name=nats-2
Annotations:  <none>
Status:       Running
IP:           10.42.3.171
IPs:
  IP:           10.42.3.171
Controlled By:  StatefulSet/nats
Containers:
  nats:
    Container ID:  containerd://35c668c8ddb5232b27df0e422b58b7c2f2ca6fbe5c57371e1f9e33d141b3de92
    Image:         nats:alpine3.14
    Image ID:      docker.io/library/nats@sha256:8e3fd5de8c5ec5bd9055ae6a38acad7e7c42aba840e09f603c667e9f2f069b43
    Ports:         4222/TCP, 7422/TCP, 7522/TCP, 6222/TCP, 8222/TCP, 7777/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      nats-server
      --config
      /etc/nats-config/nats.conf
    State:          Running
      Started:      Tue, 06 Jul 2021 22:45:17 +0200
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8222/ delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8222/ delay=10s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:           nats-2 (v1:metadata.name)
      POD_NAMESPACE:      default (v1:metadata.namespace)
      CLUSTER_ADVERTISE:  $(POD_NAME).nats.$(POD_NAMESPACE).svc.cluster.local.
    Mounts:
      /data/jetstream from nats-js-pvc (rw)
      /etc/nats-config from config-volume (rw)
      /var/run/nats from pid (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gp4w2 (ro)
  reloader:
    Container ID:  containerd://5f47082b7b3e0c5d3105d85ddb709c2fdc49e2b23250df7add4403f84b4d431e
    Image:         natsio/nats-server-config-reloader:0.6.1
    Image ID:      docker.io/natsio/nats-server-config-reloader@sha256:b820cd3eaf261e146417d2273fa94223e6205d53419b7ce1f95b0e5751e8ee00
    Port:          <none>
    Host Port:     <none>
    Command:
      nats-server-config-reloader
      -pid
      /var/run/nats/nats.pid
      -config
      /etc/nats-config/nats.conf
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 06 Jul 2021 22:45:26 +0200
      Finished:     Tue, 06 Jul 2021 22:45:26 +0200
    Ready:          False
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /etc/nats-config from config-volume (rw)
      /var/run/nats from pid (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gp4w2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nats-js-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nats-js-pvc-nats-2
    ReadOnly:   false
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nats-config
    Optional:  false
  pid:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  default-token-gp4w2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gp4w2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  27s                default-scheduler  Successfully assigned default/nats-2 to worker3
  Normal   Pulled     26s                kubelet            Container image "nats:alpine3.14" already present on machine
  Normal   Created    26s                kubelet            Created container nats
  Normal   Started    25s                kubelet            Started container nats
  Normal   Pulling    25s                kubelet            Pulling image "natsio/nats-server-config-reloader:0.6.1"
  Normal   Pulled     20s                kubelet            Successfully pulled image "natsio/nats-server-config-reloader:0.6.1"
  Warning  BackOff    14s (x2 over 15s)  kubelet            Back-off restarting failed container
  Normal   Pulled     1s (x2 over 19s)   kubelet            Container image "natsio/nats-server-config-reloader:0.6.1" already present on machine
  Normal   Created    1s (x3 over 20s)   kubelet            Created container reloader
  Normal   Started    1s (x3 over 20s)   kubelet            Started container reloader

Logs from the reloader container:

standard_init_linux.go:211: exec user process caused "exec format error"
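That "exec format error" is the kernel refusing to run a binary built for a different architecture: the ELF header's machine field doesn't match the host CPU. A quick way to see what a binary was actually built for is to read the `e_machine` field at offset 18 of the ELF header; a minimal Python sketch (the sample bytes below are hand-built for illustration, not extracted from the reloader image):

```python
import struct

# ELF e_machine values for the two architectures in question
E_MACHINE = {0x3E: "x86-64", 0xB7: "aarch64"}

def elf_arch(header: bytes) -> str:
    """Return the target architecture encoded in an ELF header."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    # e_machine is a little-endian u16 at byte offset 18
    (machine,) = struct.unpack_from("<H", header, 18)
    return E_MACHINE.get(machine, f"unknown (0x{machine:x})")

# Hand-built header fragment of an amd64 executable:
# 16-byte e_ident, then e_type (2 = ET_EXEC), then e_machine (0x3E = x86-64)
amd64_header = b"\x7fELF\x02\x01\x01" + b"\x00" * 9 + b"\x02\x00" + b"\x3e\x00"
print(elf_arch(amd64_header))  # x86-64
```

Running the equivalent check (e.g. `file /path/to/binary`) inside the failing image would show an x86-64 binary on an aarch64 node, which is consistent with the manifest list advertising arm64 support the image doesn't actually have.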
variadico commented 3 years ago

These images have been updated to work on ARM. We tested on ARM64 Ubuntu.