When starting a new container with recently published images, zenohd fails to start because of a missing environment variable. This seems to apply to all images published using the new release process. Namely, 0.10.1-rc is not affected, but 0.11.0-rc.2 is.
A simple workaround for the time being is to pass the missing BINARY variable to docker run ourselves.
The entrypoint script expands $BINARY without providing a default:

$ cat /entrypoint.sh
#!/bin/ash
echo " * Starting: /$BINARY $*"
exec /$BINARY $*

Passing the variable explicitly lets the router start:

$ docker run -e BINARY=zenohd -p 7447:7447/tcp -p 8000:8000/tcp eclipse/zenoh:0.11.0-rc.2
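To illustrate why the container fails when the variable is missing: with BINARY unset, /$BINARY expands to just "/", which exec cannot run because it is a directory. A minimal standalone sketch (not the actual entrypoint, just a simulation of its expansion):

```shell
# Simulate the entrypoint's variable expansion with BINARY unset
unset BINARY
target="/$BINARY"          # expands to "/"
echo "would exec: $target"
# exec would fail here, since the target is a directory, not a binary
[ -d "$target" ] && echo "target is a directory"
```

This is why `-e BINARY=zenohd` is enough to restore the expected `exec /zenohd` behavior.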
* Starting: /zenohd
2024-04-30T12:07:36.238572Z INFO main ThreadId(01) zenohd: zenohd v0.11.0-rc.2 built with rustc 1.72.0 (5680fa18f 2023-08-23)
2024-04-30T12:07:36.241575Z INFO main ThreadId(01) zenohd: Initial conf: {"access_control":{"default_permission":"deny","enabled":false,"rules":null},"adminspace":{"enabled":true,"permissions":{"read":true,"write":false}},"aggregation":{"publishers":[],"subscribers":[]},"connect":{"endpoints":[],"exit_on_failure":null,"retry":null,"timeout_ms":null},"downsampling":[],"id":"490bec4e0d7ec9d12b84ddee723d58b7","listen":{"endpoints":["tcp/[::]:7447"],"exit_on_failure":null,"retry":null,"timeout_ms":null},"metadata":null,"mode":"router","plugins":{"rest":{"__required__":true,"http_port":"8000"}},"plugins_loading":{"enabled":true,"search_dirs":null},"queries_default_timeout":null,"routing":{"peer":{"mode":null},"router":{"peers_failover_brokering":null}},"scouting":{"delay":null,"gossip":{"autoconnect":null,"enabled":null,"multihop":null},"multicast":{"address":null,"autoconnect":null,"enabled":true,"interface":null,"listen":null},"timeout":null},"timestamping":{"drop_future_timestamp":null,"enabled":null},"transport":{"auth":{"pubkey":{"key_size":null,"known_keys_file":null,"private_key_file":null,"private_key_pem":null,"public_key_file":null,"public_key_pem":null},"usrpwd":{"dictionary_file":null,"password":null,"user":null}},"link":{"protocols":null,"rx":{"buffer_size":65535,"max_message_size":1073741824},"tls":{"client_auth":null,"client_certificate":null,"client_private_key":null,"root_ca_certificate":null,"server_certificate":null,"server_name_verification":null,"server_private_key":null},"tx":{"batch_size":65535,"keep_alive":4,"lease":10000,"queue":{"backoff":100,"congestion_control":{"wait_before_drop":1000},"size":{"background":1,"control":1,"data":4,"data_high":2,"data_low":2,"interactive_high":1,"interactive_low":1,"real_time":1}},"sequence_number_resolution":"32bit","threads":3},"unixpipe":{"file_access_mask":null}},"multicast":{"compression":{"enabled":false},"join_interval":2500,"max_sessions":1000,"qos":{"enabled":false}},"shared_memory":{"enabled":false},"unicast":{"accept_pending":100,"accept_timeout":10000,"compression":{"enabled":false},"lowlatency":false,"max_links":1,"max_sessions":1000,"qos":{"enabled":true}}}}
2024-04-30T12:07:36.243740Z INFO main ThreadId(01) zenoh::net::runtime: Using ZID: 490bec4e0d7ec9d12b84ddee723d58b7
2024-04-30T12:07:36.251114Z INFO main ThreadId(01) zenoh::plugins::loader: Loading required plugin "rest"
2024-04-30T12:07:36.263541Z INFO main ThreadId(01) zenoh::plugins::loader: Starting required plugin "rest"
2024-04-30T12:07:36.280490Z INFO main ThreadId(01) zenoh::plugins::loader: Successfully started plugin rest from "/libzenoh_plugin_rest.so"
2024-04-30T12:07:36.280533Z INFO main ThreadId(01) zenoh::plugins::loader: Finished loading plugins
2024-04-30T12:07:36.292864Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/[fe80::42:acff:fe11:3]:7447
2024-04-30T12:07:36.293311Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/172.17.0.3:7447
2024-04-30T12:07:36.303035Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: zenohd listening scout messages on 224.0.0.224:7446
To reproduce
docker run -p 7447:7447/tcp -p 8000:8000/tcp eclipse/zenoh:0.11.0-rc.2
System info
The issue is within the built container images, so the system it's run on should not affect the results.