thin-edge / tedge-container-plugin

thin-edge.io community plugin to manage containers and container groups (aka docker compose)
MIT License

UI does not display running containers #44

Closed ak3306361 closed 1 week ago

ak3306361 commented 1 week ago

In the UI, the expected containers are not being displayed. The device has containers running, yet none are visible in the UI.

Below is the output of the software management plugin commands when executed manually:

aoi@raspberrypi:~ $ sudo -u tedge /etc/tedge/sm-plugins/container list
Loading setting file: /etc/tedge-container-plugin/env
aoi@raspberrypi:~ $ sudo -u tedge /etc/tedge/sm-plugins/container-group list
Loading setting file: /etc/tedge-container-plugin/env
opcua   latest

However, manually calling docker ps shows that there are indeed containers running on the device.

aoi@raspberrypi:~ $ docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED      STATUS      PORTS     NAMES
14f29139cbd2   ghcr.io/thin-edge/opcua-device-gateway   "/app/entrypoint.sh"     3 days ago   Up 2 days             opcua-gateway
66e9eb08825d   public.ecr.aws/takebishi/tkbs-dgwd20     "/bin/sh -c StartDev…"   3 days ago   Up 2 days             takebishi-device-gateway

Below are some screenshots showing a device where the "containers" and "container groups" tabs are visible, and a device where they are not visible.

(four screenshots attached)
reubenmiller commented 1 week ago

@ak3306361 I've just reformatted the ticket to focus on the actual issue, as it was not 100% clear from the original post. I thought it might be useful to see how the ticket could be formulated so that the actual unexpected behaviour is the focus (please don't take it negatively).

reubenmiller commented 1 week ago

@ak3306361 Can you please run the following command and post the output here? (The command just runs the tedge-container-monitor command, which collects the runtime information of a given container, with debugging enabled.)

sudo -u tedge sh -x /usr/bin/tedge-container-monitor opcua-gateway

Then afterwards, check the local MQTT broker to see what status has been posted.

tedge mqtt sub 'te/device/main/service/#'
ak3306361 commented 1 week ago

@reubenmiller Thank you for the feedback. I appreciate your effort in reformatting the ticket to focus on the actual issue.

aoi@raspberrypi:~ $ sudo -u tedge sh -x /usr/bin/tedge-container-monitor opcua-gateway
+ set -e
+ LOG_LEVEL=3
+ LOG_TIMESTAMPS=1
+ load_settings
+ SETTINGS_FILE=/etc/tedge-container-plugin/env
+ find /etc/tedge-container-plugin/env -perm 644
+ head -1
+ FOUND_FILE=/etc/tedge-container-plugin/env
+ [ -n /etc/tedge-container-plugin/env ]
+ debug Reloading setting file: /etc/tedge-container-plugin/env
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ . /etc/tedge-container-plugin/env
+ CONTAINER_CLI_OPTIONS=docker podman nerdctl
+ CONTAINER_CLI=docker
+ INTERVAL=60
+ TELEMETRY=1
+ META_INFO=1
+ MQTT_HOST=127.0.0.1
+ MQTT_PORT=1883
+ LOG_LEVEL=info
+ LOG_TIMESTAMPS=1
+ SERVICE_TYPE=container
+ GROUP_SERVICE_TYPE=container-group
+ PRUNE_IMAGES=0
+ VALIDATE_TAR_CONTENTS=0
+ CONTAINER_RUN_OPTIONS=--cpus 1 --memory 64m
+ convert_loglevel info
+ echo info+ 
tr [:upper:] [:lower:]
+ level=info
+ echo 3
+ LOG_LEVEL=3
+ get_loglevel_name 3
+ echo 3
+ tr [:upper:] [:lower:]
+ level=3
+ echo info
+ info Current log level: info (3)
+ [ -n 3 ]
+ [ 3 -ge 3 ]
+ log INFO Current log level: info (3)
+ level=INFO
+ shift
+ timestamp
+ [ 1 = 1 ]
+ date +%Y-%m-%dT%H:%M:%S%z 
+ echo 2024-07-04T12:53:38+0900 [pid=3381] INFO Current log level: info (3)
2024-07-04T12:53:38+0900 [pid=3381] INFO Current log level: info (3)
+ info Current interval: 60 (seconds)
+ [ -n 3 ]
+ [ 3 -ge 3 ]
+ log INFO Current interval: 60 (seconds)
+ level=INFO
+ shift
+ timestamp
+ [ 1 = 1 ]
+ date +%Y-%m-%dT%H:%M:%S%z 
+ echo 2024-07-04T12:53:38+0900 [pid=3381] INFO Current interval: 60 (seconds)
2024-07-04T12:53:38+0900 [pid=3381] INFO Current interval: 60 (seconds)
+ info Successfully loaded settings
+ [ -n 3 ]
+ [ 3 -ge 3 ]
+ log INFO Successfully loaded settings
+ level=INFO
+ shift
+ timestamp
+ [ 1 = 1 ]
+ date +%Y-%m-%dT%H:%M:%S%z 
+ echo 2024-07-04T12:53:38+0900 [pid=3381] INFO Successfully loaded settings
2024-07-04T12:53:38+0900 [pid=3381] INFO Successfully loaded settings
+ CONTAINER_CLI_OPTIONS=docker podman nerdctl
+ CONTAINER_CLI=docker
+ COMPOSE_CLI=
+ MONITOR_COMPOSE_PROJECTS=1
+ INTERVAL=60
+ TELEMETRY=1
+ META_INFO=1
+ MQTT_HOST=127.0.0.1
+ MQTT_PORT=1883
+ SERVICE_NAME=tedge-container-monitor
+ SERVICE_TYPE=container
+ GROUP_SERVICE_TYPE=container-group
+ SUB_PID=
+ printf \t
+ TAB=  
+ POSITIONAL=
+ [ 1 -gt 0 ]
+ [ -n  ]
+ POSITIONAL=opcua-gateway
+ shift
+ [ 0 -gt 0 ]
+ set -- opcua-gateway
+ tedge config get mqtt.topic_root
+ TOPIC_ROOT=te
+ tedge config get mqtt.device_topic_id
+ TOPIC_ID=device/main//
+ TOPIC_PREFIX=te/device/main
+ echo device/main//
+ sed s/\/*$//
+ parent=device/main
+ TOPIC_PREFIX=te/device/main
+ convert_loglevel 3
+ echo 3
+ tr [:upper:] [:lower:]
+ level=3
+ echo 3
+ LOG_LEVEL=3
+ command_exists tedge
+ command -v tedge
+ [ -z 127.0.0.1 ]
+ [ -z 1883 ]
+ [ -z docker ]
+ [ -n docker ]
+ [ 1 = 1 ]
+ docker stats --all --no-stream --format {{.ID}}\t{{.Name}}\t{{.CPUPerc}}\t{{.MemPerc}}\t{{.NetIO}}
+ [ 1 != 0 ]
+ [ -z  ]
+ docker compose
+ COMPOSE_CLI=docker compose
+ command_exists docker
+ command -v docker
+ [ 1 -gt 0 ]
+ [ opcua-gateway != * ]
+ NAME=opcua-gateway
+ info Checking health of opcua-gateway
+ [ -n 3 ]
+ [ 3 -ge 3 ]
+ log INFO Checking health of opcua-gateway
+ level=INFO
+ shift
+ timestamp
+ [ 1 = 1 ]
+ date +%Y-%m-%dT%H:%M:%S%z 
+ echo 2024-07-04T12:53:41+0900 [pid=3381] INFO Checking health of opcua-gateway
2024-07-04T12:53:41+0900 [pid=3381] INFO Checking health of opcua-gateway
+ check_health opcua-gateway
+ NAMES=
+ [ 1 -gt 0 ]
+ NAMES=opcua-gateway
+ docker ps -a --format {{.Names}}\t{{.State}}\t{{.Labels}} --filter name=opcua-gateway
+ grep -v com.docker.compose
+ IFS=   read -r NAME STATE _OTHER
+ [ 1 = 1 ]
+ debug Checking compose projects
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ docker ps --no-trunc --all --format {{or .Labels " "}}\t{{or .State " "}}\t{{.Label "com.docker.compose.project" }}\t{{.Label "com.docker.compose.service" }}
+ grep com.docker.compose
+ IFS=   read -r _LABELS STATE PROJECT_NAME PROJECT_SERVICE_NAME
+ CLOUD_SERVICE_NAME=opcua@opcua-gateway
+ STATE=running
+ echo running
+ tr [:upper:] [:lower:]
+ STATE=running
+ info State: service=opcua@opcua-gateway, state=running
+ [ -n 3 ]
+ [ 3 -ge 3 ]
+ log INFO State: service=opcua@opcua-gateway, state=running
+ level=INFO
+ shift
+ timestamp
+ [ 1 = 1 ]
+ date +%Y-%m-%dT%H:%M:%S%z 
+ echo 2024-07-04T12:53:41+0900 [pid=3381] INFO State: service=opcua@opcua-gateway, state=running
2024-07-04T12:53:41+0900 [pid=3381] INFO State: service=opcua@opcua-gateway, state=running
+ STATUS=up
+ register_service opcua@opcua-gateway container-group
+ name=opcua@opcua-gateway
+ type=container-group
+ printf {"@type":"service","name":"%s","type":"%s"} opcua@opcua-gateway container-group
+ message={"@type":"service","name":"opcua@opcua-gateway","type":"container-group"}
+ publish_retain te/device/main/service/opcua@opcua-gateway {"@type":"service","name":"opcua@opcua-gateway","type":"container-group"}
+ TOPIC=te/device/main/service/opcua@opcua-gateway
+ MESSAGE={"@type":"service","name":"opcua@opcua-gateway","type":"container-group"}
+ debug [te/device/main/service/opcua@opcua-gateway] {"@type":"service","name":"opcua@opcua-gateway","type":"container-group"}
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ command -v tedge
+ tedge mqtt pub -r te/device/main/service/opcua@opcua-gateway {"@type":"service","name":"opcua@opcua-gateway","type":"container-group"}
+ printf {"pid":"%s","status":"%s"} opcua@opcua-gateway up
+ MESSAGE={"pid":"opcua@opcua-gateway","status":"up"}
+ publish_health opcua@opcua-gateway {"pid":"opcua@opcua-gateway","status":"up"}
+ SERVICE_NAME=opcua@opcua-gateway
+ MESSAGE={"pid":"opcua@opcua-gateway","status":"up"}
+ publish_retain te/device/main/service/opcua@opcua-gateway/status/health {"pid":"opcua@opcua-gateway","status":"up"}
+ TOPIC=te/device/main/service/opcua@opcua-gateway/status/health
+ MESSAGE={"pid":"opcua@opcua-gateway","status":"up"}
+ debug [te/device/main/service/opcua@opcua-gateway/status/health] {"pid":"opcua@opcua-gateway","status":"up"}
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ command -v tedge
+ tedge mqtt pub -r te/device/main/service/opcua@opcua-gateway/status/health {"pid":"opcua@opcua-gateway","status":"up"}
+ IFS=   read -r _LABELS STATE PROJECT_NAME PROJECT_SERVICE_NAME
+ CLOUD_SERVICE_NAME=opcua@takebishi-device-gateway
+ STATE=running
+ echo running
+ tr [:upper:] [:lower:]
+ STATE=running
+ info State: service=opcua@takebishi-device-gateway, state=running
+ [ -n 3 ]
+ [ 3 -ge 3 ]
+ log INFO State: service=opcua@takebishi-device-gateway, state=running
+ level=INFO
+ shift
+ timestamp
+ [ 1 = 1 ]
+ date +%Y-%m-%dT%H:%M:%S%z 
+ echo 2024-07-04T12:53:41+0900 [pid=3381] INFO State: service=opcua@takebishi-device-gateway, state=running
2024-07-04T12:53:41+0900 [pid=3381] INFO State: service=opcua@takebishi-device-gateway, state=running
+ STATUS=up
+ register_service opcua@takebishi-device-gateway container-group
+ name=opcua@takebishi-device-gateway
+ type=container-group
+ printf {"@type":"service","name":"%s","type":"%s"} opcua@takebishi-device-gateway container-group
+ message={"@type":"service","name":"opcua@takebishi-device-gateway","type":"container-group"}
+ publish_retain te/device/main/service/opcua@takebishi-device-gateway {"@type":"service","name":"opcua@takebishi-device-gateway","type":"container-group"}
+ TOPIC=te/device/main/service/opcua@takebishi-device-gateway
+ MESSAGE={"@type":"service","name":"opcua@takebishi-device-gateway","type":"container-group"}
+ debug [te/device/main/service/opcua@takebishi-device-gateway] {"@type":"service","name":"opcua@takebishi-device-gateway","type":"container-group"}
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ command -v tedge
+ tedge mqtt pub -r te/device/main/service/opcua@takebishi-device-gateway {"@type":"service","name":"opcua@takebishi-device-gateway","type":"container-group"}
+ printf {"pid":"%s","status":"%s"} opcua@takebishi-device-gateway up
+ MESSAGE={"pid":"opcua@takebishi-device-gateway","status":"up"}
+ publish_health opcua@takebishi-device-gateway {"pid":"opcua@takebishi-device-gateway","status":"up"}
+ SERVICE_NAME=opcua@takebishi-device-gateway
+ MESSAGE={"pid":"opcua@takebishi-device-gateway","status":"up"}
+ publish_retain te/device/main/service/opcua@takebishi-device-gateway/status/health {"pid":"opcua@takebishi-device-gateway","status":"up"}
+ TOPIC=te/device/main/service/opcua@takebishi-device-gateway/status/health
+ MESSAGE={"pid":"opcua@takebishi-device-gateway","status":"up"}
+ debug [te/device/main/service/opcua@takebishi-device-gateway/status/health] {"pid":"opcua@takebishi-device-gateway","status":"up"}
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ command -v tedge
+ tedge mqtt pub -r te/device/main/service/opcua@takebishi-device-gateway/status/health {"pid":"opcua@takebishi-device-gateway","status":"up"}
+ IFS=   read -r _LABELS STATE PROJECT_NAME PROJECT_SERVICE_NAME
+ [ 1 = 1 ]
+ check_container_info opcua-gateway
+ [ 1 = 1 ]
+ NAMES=
+ [ 1 -gt 0 ]
+ NAMES=opcua-gateway
+ debug Collecting container meta information
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ docker ps --no-trunc --all --filter name=opcua-gateway --format {{.ID}}\t{{or .Names " "}}\t{{or .Labels " "}}\t{{or .State " "}}\t{{or .Status " "}}\t{{.CreatedAt}}\t{{.Image}}\t{{or .Ports " "}}\t{{or .Networks " "}}\t{{or .RunningFor " "}}\t{{or .Size " "}}\t{{json (or .Command " ")}}
+ grep -v com.docker.compose
+ IFS=   read -r ID NAME _LABELS STATE STATUS CREATEDAT IMAGE PORTS NETWORKS RUNNINGFOR SIZE COMMAND
+ [ 1 = 1 ]
+ debug Collecting container-group meta information
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ docker ps --no-trunc --all --format {{.ID}}\t{{or .Names " "}}\t{{or .Labels " "}}\t{{or .State " "}}\t{{or .Status " "}}\t{{.CreatedAt}}\t{{.Image}}\t{{or .Ports " "}}\t{{or .Networks " "}}\t{{or .RunningFor " "}}\t{{or .Size " "}}\t{{json (or .Command " ")}}\t{{.Label "com.docker.compose.project" }}\t{{.Label "com.docker.compose.service" }}
+ grep com.docker.compose
+ IFS=   read -r ID NAME _LABELS STATE STATUS CREATEDAT IMAGE PORTS NETWORKS RUNNINGFOR SIZE COMMAND PROJECT_NAME PROJECT_SERVICE_NAME
+ CLOUD_SERVICE_NAME=opcua@opcua-gateway
+ [ -n opcua@opcua-gateway ]
+ printf {"containerId":"%s","containerName":"%s","state":"%s","containerStatus":"%s","createdAt":"%s","image":"%s","ports":"%s","networks":"%s","runningFor":"%s","filesystem":"%s","command":%s,"projectName":"%s","serviceName":"%s"} 9302f48b73be696bb36f5055e16f844e8238e41e0e5c377b54c26b38f7259e6b opcua-gateway running Up 18 hours 2024-07-03 15:58:18 +0900 JST ghcr.io/thin-edge/opcua-device-gateway  host 21 hours ago 104MB (virtual 614MB) "\"/app/entrypoint.sh\"" opcua opcua-gateway
+ message={"containerId":"9302f48b73be696bb36f5055e16f844e8238e41e0e5c377b54c26b38f7259e6b","containerName":"opcua-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"ghcr.io/thin-edge/opcua-device-gateway","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"104MB (virtual 614MB)","command":"\"/app/entrypoint.sh\"","projectName":"opcua","serviceName":"opcua-gateway"}
+ publish_retain te/device/main/service/opcua@opcua-gateway/twin/container {"containerId":"9302f48b73be696bb36f5055e16f844e8238e41e0e5c377b54c26b38f7259e6b","containerName":"opcua-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"ghcr.io/thin-edge/opcua-device-gateway","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"104MB (virtual 614MB)","command":"\"/app/entrypoint.sh\"","projectName":"opcua","serviceName":"opcua-gateway"}
+ TOPIC=te/device/main/service/opcua@opcua-gateway/twin/container
+ MESSAGE={"containerId":"9302f48b73be696bb36f5055e16f844e8238e41e0e5c377b54c26b38f7259e6b","containerName":"opcua-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"ghcr.io/thin-edge/opcua-device-gateway","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"104MB (virtual 614MB)","command":"\"/app/entrypoint.sh\"","projectName":"opcua","serviceName":"opcua-gateway"}
+ debug [te/device/main/service/opcua@opcua-gateway/twin/container] {"containerId":"9302f48b73be696bb36f5055e16f844e8238e41e0e5c377b54c26b38f7259e6b","containerName":"opcua-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"ghcr.io/thin-edge/opcua-device-gateway","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"104MB (virtual 614MB)","command":"\"/app/entrypoint.sh\"","projectName":"opcua","serviceName":"opcua-gateway"}
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ command -v tedge
+ tedge mqtt pub -r te/device/main/service/opcua@opcua-gateway/twin/container {"containerId":"9302f48b73be696bb36f5055e16f844e8238e41e0e5c377b54c26b38f7259e6b","containerName":"opcua-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"ghcr.io/thin-edge/opcua-device-gateway","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"104MB (virtual 614MB)","command":"\"/app/entrypoint.sh\"","projectName":"opcua","serviceName":"opcua-gateway"}
+ IFS=   read -r ID NAME _LABELS STATE STATUS CREATEDAT IMAGE PORTS NETWORKS RUNNINGFOR SIZE COMMAND PROJECT_NAME PROJECT_SERVICE_NAME
+ CLOUD_SERVICE_NAME=opcua@takebishi-device-gateway
+ [ -n opcua@takebishi-device-gateway ]
+ printf {"containerId":"%s","containerName":"%s","state":"%s","containerStatus":"%s","createdAt":"%s","image":"%s","ports":"%s","networks":"%s","runningFor":"%s","filesystem":"%s","command":%s,"projectName":"%s","serviceName":"%s"} 7c55d6bfd1d7128ede13828dd7ba1b636ca45e060885da0c965bec1603772883 takebishi-device-gateway running Up 18 hours 2024-07-03 15:58:18 +0900 JST public.ecr.aws/takebishi/tkbs-dgwd20  host 21 hours ago 10.2MB (virtual 96.8MB) "\"/bin/sh -c StartDeviceGateway.sh\"" opcua takebishi-device-gateway
+ message={"containerId":"7c55d6bfd1d7128ede13828dd7ba1b636ca45e060885da0c965bec1603772883","containerName":"takebishi-device-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"public.ecr.aws/takebishi/tkbs-dgwd20","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"10.2MB (virtual 96.8MB)","command":"\"/bin/sh -c StartDeviceGateway.sh\"","projectName":"opcua","serviceName":"takebishi-device-gateway"}
+ publish_retain te/device/main/service/opcua@takebishi-device-gateway/twin/container {"containerId":"7c55d6bfd1d7128ede13828dd7ba1b636ca45e060885da0c965bec1603772883","containerName":"takebishi-device-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"public.ecr.aws/takebishi/tkbs-dgwd20","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"10.2MB (virtual 96.8MB)","command":"\"/bin/sh -c StartDeviceGateway.sh\"","projectName":"opcua","serviceName":"takebishi-device-gateway"}
+ TOPIC=te/device/main/service/opcua@takebishi-device-gateway/twin/container
+ MESSAGE={"containerId":"7c55d6bfd1d7128ede13828dd7ba1b636ca45e060885da0c965bec1603772883","containerName":"takebishi-device-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"public.ecr.aws/takebishi/tkbs-dgwd20","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"10.2MB (virtual 96.8MB)","command":"\"/bin/sh -c StartDeviceGateway.sh\"","projectName":"opcua","serviceName":"takebishi-device-gateway"}
+ debug [te/device/main/service/opcua@takebishi-device-gateway/twin/container] {"containerId":"7c55d6bfd1d7128ede13828dd7ba1b636ca45e060885da0c965bec1603772883","containerName":"takebishi-device-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"public.ecr.aws/takebishi/tkbs-dgwd20","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"10.2MB (virtual 96.8MB)","command":"\"/bin/sh -c StartDeviceGateway.sh\"","projectName":"opcua","serviceName":"takebishi-device-gateway"}
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ command -v tedge
+ tedge mqtt pub -r te/device/main/service/opcua@takebishi-device-gateway/twin/container {"containerId":"7c55d6bfd1d7128ede13828dd7ba1b636ca45e060885da0c965bec1603772883","containerName":"takebishi-device-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"public.ecr.aws/takebishi/tkbs-dgwd20","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"10.2MB (virtual 96.8MB)","command":"\"/bin/sh -c StartDeviceGateway.sh\"","projectName":"opcua","serviceName":"takebishi-device-gateway"}
+ IFS=   read -r ID NAME _LABELS STATE STATUS CREATEDAT IMAGE PORTS NETWORKS RUNNINGFOR SIZE COMMAND PROJECT_NAME PROJECT_SERVICE_NAME
+ [ 1 = 1 ]
+ check_telemetry
+ [ 1 = 1 ]
+ debug Collecting container stats
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ docker ps -a --format {{or .Names " "}}\t{{.Labels}}
+ grep -v com.docker.compose
+ + cuttr -f1 \n

+ CONTAINERS=
+ [ -n  ]
+ debug No containers found, therefore no metrics to collect
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ debug Collecting container-group stats
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ docker ps --format {{or .Names " "}}\t{{.Labels}}
+ grep com.docker.compose
+ tr \n  
+ cut -f1
+ CONTAINERS=opcua-gateway takebishi-device-gateway 
+ [ -n opcua-gateway takebishi-device-gateway  ]
+ IFS=  +  read -r NAME CPU_PERC MEM_PERC NET_IO
echo opcua-gateway takebishi-device-gateway 
+ docker stats --all --no-stream --format {{.Name}}\t{{.CPUPerc}}\t{{.MemPerc}}\t{{.NetIO}} opcua-gateway takebishi-device-gateway
+ docker ps -a --format {{.Label "com.docker.compose.project" }}@{{.Label "com.docker.compose.service" }} --filter name=opcua-gateway
+ CLOUD_SERVICE_NAME=opcua@opcua-gateway
+ echo 0B / 0B
+ sed s/[^0-9.].*//g
+ NET_IO=0
+ printf {"container":{"cpu":%s,"memory":%s,"netio":%s}} 28.29 9.82 0
+ message={"container":{"cpu":28.29,"memory":9.82,"netio":0}}
+ publish te/device/main/service/opcua@opcua-gateway/m/resource_usage {"container":{"cpu":28.29,"memory":9.82,"netio":0}}
+ TOPIC=te/device/main/service/opcua@opcua-gateway/m/resource_usage
+ MESSAGE={"container":{"cpu":28.29,"memory":9.82,"netio":0}}
+ debug [te/device/main/service/opcua@opcua-gateway/m/resource_usage] {"container":{"cpu":28.29,"memory":9.82,"netio":0}}
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ command -v tedge
+ tedge mqtt pub te/device/main/service/opcua@opcua-gateway/m/resource_usage {"container":{"cpu":28.29,"memory":9.82,"netio":0}}
+ IFS=   read -r NAME CPU_PERC MEM_PERC NET_IO
+ docker ps -a --format {{.Label "com.docker.compose.project" }}@{{.Label "com.docker.compose.service" }} --filter name=takebishi-device-gateway
+ CLOUD_SERVICE_NAME=opcua@takebishi-device-gateway
+ echo 0B / 0B
+ sed s/[^0-9.].*//g
+ NET_IO=0
+ printf {"container":{"cpu":%s,"memory":%s,"netio":%s}} 6.49 1.82 0
+ message={"container":{"cpu":6.49,"memory":1.82,"netio":0}}
+ publish te/device/main/service/opcua@takebishi-device-gateway/m/resource_usage {"container":{"cpu":6.49,"memory":1.82,"netio":0}}
+ TOPIC=te/device/main/service/opcua@takebishi-device-gateway/m/resource_usage
+ MESSAGE={"container":{"cpu":6.49,"memory":1.82,"netio":0}}
+ debug [te/device/main/service/opcua@takebishi-device-gateway/m/resource_usage] {"container":{"cpu":6.49,"memory":1.82,"netio":0}}
+ [ -n 3 ]
+ [ 3 -ge 4 ]
+ command -v tedge
+ tedge mqtt pub te/device/main/service/opcua@takebishi-device-gateway/m/resource_usage {"container":{"cpu":6.49,"memory":1.82,"netio":0}}
+ IFS=   read -r NAME CPU_PERC MEM_PERC NET_IO
+ exit 0
aoi@raspberrypi:~ $ tedge mqtt sub 'te/device/main/service/#'
INFO: Connected
[te/device/main/service/c8y-firmware-plugin] {"@parent":"device/main//","@type":"service","type":"service"}
[te/device/main/service/c8y-firmware-plugin/status/health] {"pid":921,"status":"up","time":1720000644.3039103}
[te/device/main/service/mosquitto-c8y-bridge] {"@id":"raspi:device:main:service:mosquitto-c8y-bridge","@parent":"device/main//","@type":"service","name":"mosquitto-c8y-bridge","type":"service"}
[te/device/main/service/mosquitto-c8y-bridge/status/health] 1
[te/device/main/service/tedge-mapper-c8y] {"@parent":"device/main//","@type":"service","type":"service"}
[te/device/main/service/tedge-mapper-c8y/status/health] {"pid":934,"status":"up","time":1720000644.2879245}
[te/device/main/service/tedge-agent] {"@parent":"device/main//","@type":"service","type":"service"}
[te/device/main/service/tedge-agent/status/health] {"pid":932,"status":"up","time":1720000644.3687756}
[te/device/main/service/tedge-container-monitor] {"@type":"service","name":"tedge-container-monitor","type":"service"}
[te/device/main/service/tedge-container-monitor/status/health] {"status":"up","pid":"2101"}
[te/device/main/service/opcua@opcua-gateway] {"@type":"service","name":"opcua@opcua-gateway","type":"container-group"}
[te/device/main/service/opcua@opcua-gateway/status/health] {"pid":"opcua@opcua-gateway","status":"up"}
[te/device/main/service/opcua@opcua-gateway/twin/container] {"containerId":"9302f48b73be696bb36f5055e16f844e8238e41e0e5c377b54c26b38f7259e6b","containerName":"opcua-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"ghcr.io/thin-edge/opcua-device-gateway","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"104MB (virtual 614MB)","command":"\"/app/entrypoint.sh\"","projectName":"opcua","serviceName":"opcua-gateway"}
[te/device/main/service/opcua@takebishi-device-gateway] {"@type":"service","name":"opcua@takebishi-device-gateway","type":"container-group"}
[te/device/main/service/opcua@takebishi-device-gateway/status/health] {"pid":"opcua@takebishi-device-gateway","status":"up"}
[te/device/main/service/opcua@takebishi-device-gateway/twin/container] {"containerId":"7c55d6bfd1d7128ede13828dd7ba1b636ca45e060885da0c965bec1603772883","containerName":"takebishi-device-gateway","state":"running","containerStatus":"Up 18 hours","createdAt":"2024-07-03 15:58:18 +0900 JST","image":"public.ecr.aws/takebishi/tkbs-dgwd20","ports":"","networks":"host","runningFor":"21 hours ago","filesystem":"10.2MB (virtual 96.8MB)","command":"\"/bin/sh -c StartDeviceGateway.sh\"","projectName":"opcua","serviceName":"takebishi-device-gateway"}
[te/device/main/service/aoi@child01] {"@type":"service","name":"aoi@child01","type":"container-group"}
[te/device/main/service/aoi@child01/status/health] {"pid":"aoi@child01","status":"up"}
[te/device/main/service/aoi@child01/twin/container] {"containerId":"932c1782a4c407871b0472da26c49a2c0709bfc29ace1fc2c04e061d345930a7","containerName":"aoi-child01-1","state":"running","containerStatus":"Up 3 hours","createdAt":"2024-06-20 11:31:48 +0900 JST","image":"ghcr.io/thin-edge/tedge-demo-child:latest","ports":"","networks":"aoi_tedge","runningFor":"3 hours ago","filesystem":"2.1MB (virtual 178MB)","command":"\"python-tedge-agent\"","projectName":"aoi","serviceName":"child01"}
[te/device/main/service/aoi@child02] {"@type":"service","name":"aoi@child02","type":"container-group"}
[te/device/main/service/aoi@child02/status/health] {"pid":"aoi@child02","status":"up"}
[te/device/main/service/aoi@child02/twin/container] {"containerId":"45e9e40535ab7183bb2ff9b6d4d3f09a207eee13c06e0ab46e699d3c530d5bbb","containerName":"aoi-child02-1","state":"running","containerStatus":"Up 3 hours","createdAt":"2024-06-20 11:31:48 +0900 JST","image":"ghcr.io/thin-edge/tedge-demo-child:latest","ports":"","networks":"aoi_tedge","runningFor":"3 hours ago","filesystem":"2.1MB (virtual 178MB)","command":"\"python-tedge-agent\"","projectName":"aoi","serviceName":"child02"}
[te/device/main/service/aoi@tedge] {"@type":"service","name":"aoi@tedge","type":"container-group"}
[te/device/main/service/aoi@tedge/status/health] {"pid":"aoi@tedge","status":"up"}
[te/device/main/service/aoi@tedge/twin/container] {"containerId":"dbeaa8b668ca9cd666c9d77bf720f86c997e84bf497929e2678620308cab957b","containerName":"aoi-tedge-1","state":"running","containerStatus":"Up 3 hours","createdAt":"2024-06-20 11:31:48 +0900 JST","image":"ghcr.io/thin-edge/tedge-demo-main-systemd:latest","ports":"","networks":"aoi_tedge","runningFor":"3 hours ago","filesystem":"33.6MB (virtual 379MB)","command":"\"/lib/systemd/systemd\"","projectName":"aoi","serviceName":"tedge"}
reubenmiller commented 1 week ago

@ak3306361 everything looks ok from the thin-edge.io side, so maybe it is something to do with the UI plugin not working as expected.

Can you provide the following:

ak3306361 commented 1 week ago

@reubenmiller Thank you for the feedback. I've refreshed the UI page (cmd+r), but it's still not showing up on the "Services" tab of the "raspi" device.

I'm not sure if this is helpful, but I have checked the implementation in the following link (see attached second image). This may be unrelated to the current issue: https://github.com/thin-edge/takebishi-devicegateway/blob/main/docs/example_opc_ua.md

(three screenshots attached)

ak3306361 commented 1 week ago

For the communication with C8Y IoT Edge, I used a self-signed certificate on the Raspberry Pi. On the Raspberry Pi, I connected to the C8Y IoT Edge URL, selected "Certificate is not valid," and saved the certificate. https://thin-edge.github.io/thin-edge.io/start/connect-c8y/

I apologize for straying off topic. I thought this might be a basic mistake on my part, so I wanted to provide this information just in case.

aoi@raspberrypi:~ $ sudo tedge config set c8y.root_cert_path /etc/ssl/certs/NTTcommunicationsCumulocityIoTEdgeInternalRootCA

(screenshot attached)

reubenmiller commented 1 week ago

It looks like the mapper is not connected properly as the status is showing as down in the UI…and if the mapper is not connected then all of the other data can’t be sent to the Cumulocity IoT instance.

Instead of manually setting the c8y.root_cert_path, I would add the edge's certificate to the device's global CA store. To do so, use the following steps:

  1. Disconnect and undo the setting
    sudo tedge disconnect c8y
    sudo tedge config unset c8y.root_cert_path
  2. Follow these instructions to add the NTTcommunicationsCumulocityIoTEdgeInternalRootCA.crt to the system CA store (see the sketch after this list)
  3. Try and reconnect the mapper
    sudo tedge connect c8y
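
For reference, here's a minimal sketch of step 2 on Debian/Raspberry Pi OS. It assumes the certificate file is already on the device; the destination directory and update-ca-certificates are the standard Debian mechanism (the output later in this thread shows the same flow):

    # Copy the Root CA into the system CA store (the file must have a .crt extension)
    sudo cp NTTcommunicationsCumulocityIoTEdgeInternalRootCA.crt /usr/local/share/ca-certificates/
    # Rebuild the trusted certificate bundle under /etc/ssl/certs
    sudo update-ca-certificates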
ak3306361 commented 1 week ago

@reubenmiller I apologize for the delayed response🙇

aoi@raspberrypi:~ $ sudo mv /usr/local/share/ca-certificates/NTTcommunicationsCumulocityIoTEdgeInternalRootCA NTTcommunicationsCumulocityIoTEdgeInternalRootCA.crt

aoi@raspberrypi:~ $ ls /usr/local/share/ca-certificates/
NTTcommunicationsCumulocityIoTEdgeInternalRootCA.crt
aoi@raspberrypi:~ $ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

aoi@raspberrypi:~ $ ls /etc/ssl/certs | grep NTTcommunicationsCumulocityIoTEdgeInternalRootCA
NTTcommunicationsCumulocityIoTEdgeInternalRootCA.pem
aoi@raspberrypi:~ $ sudo tedge connect c8y
The system config file '/etc/tedge/system.toml' doesn't exist. Using '/bin/systemctl' as the default service manager

Detected mosquitto version >= 2.0.0
Checking if systemd is available.

Checking if configuration for requested bridge already exists.

Validating the bridge certificates.

Creating the device in Cumulocity cloud.

Saving configuration for requested bridge.

Restarting mosquitto service.

Awaiting mosquitto to start. This may take up to 5 seconds.

Enabling mosquitto service on reboots.

Successfully created bridge connection!

Sending packets to check connection. This may take up to 2 seconds.

Connection check is successful.

Checking if tedge-mapper is installed.

Starting tedge-mapper-c8y service.

Persisting tedge-mapper-c8y on reboot.

tedge-mapper-c8y service successfully started and enabled!

Enabling software management.

Checking if tedge-agent is installed.

Starting tedge-agent service.

Persisting tedge-agent on reboot.

tedge-agent service successfully started and enabled!

I can now see the container group 🎉, but the child devices are no longer appearing in the UI. I apologize for any inconvenience; how should I address this? I'm checking the following logs.

opcua-gateway             | 2024-07-08 02:12:58.287  INFO 1 --- [    scheduler-2] c.c.o.c.g.s.SubscriptionUpdateScheduler  : Platform credentials are not available yet, skip updating subscription and will check again in the next round

(two screenshots attached)

In the previous implementation, when child devices weren't appearing, the following error log was output.

opcua-gateway | 2024-06-06 04:24:16.712 WARN 1 --- [T Call: d:raspi] c.c.o.c.g.p.c.PlatformProvider : Error caught when checking platform availability: java.net.UnknownHostException: edgethingscloud.ug163.ft.nttcloud.net: Name or service not known

I resolved it then by running the commands below. This time there are no domain-related logs, but I performed the following steps anyway because ping (ping edgethingscloud.ug163.ft.nttcloud.net) was not getting through. The child devices are still not appearing.

sudo docker exec -it opcua-gateway bash -c "echo '10.8.163.90 edgethingscloud.ug163.ft.nttcloud.net' >> /etc/hosts"
sudo docker exec -it opcua-gateway /bin/sh -c "apt-get update && apt-get install -y iputils-ping"
sudo docker cp /etc/ssl/certs/NTTcommunicationsCumulocityIoTEdgeInternalRootCA opcua-gateway:/app
sudo docker exec -it opcua-gateway /bin/bash

keytool -import -trustcacerts -file /app/NTTcommunicationsCumulocityIoTEdgeInternalRootCA -keystore /usr/lib/jvm/java-11-openjdk-arm64/lib/security/cacerts -alias ntt    (run inside the container)

sudo docker exec -it opcua-gateway ping edgethingscloud.ug163.ft.nttcloud.net
reubenmiller commented 1 week ago

By "child child are still not appearing", you are referring to the OPCUA Gateway device not appearing right?

After manually editing containers (e.g. editing /etc/hosts, or the java keystore), you'll most likely have to restart the container before the process running inside it picks up the changes (especially the keystore change). However, manually editing containers is never recommended; instead you should build your own container image based on the ghcr.io/thin-edge/opcua-device-gateway image with the appropriate keystore added, and then modify the docker-compose.yaml file to add the hosts entry using the extra_hosts setting.
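
For illustration, a rough sketch of such a derived image (hypothetical; the base image, certificate file name and keystore path are taken from commands shown elsewhere in this thread, and changeit is the default JDK truststore password):

    # Dockerfile (sketch): bake the private Root CA into the Java truststore at build time
    FROM ghcr.io/thin-edge/opcua-device-gateway
    COPY NTTcommunicationsCumulocityIoTEdgeInternalRootCA.crt /tmp/rootca.crt
    RUN keytool -import -trustcacerts -noprompt \
          -file /tmp/rootca.crt \
          -keystore /usr/lib/jvm/java-11-openjdk-arm64/lib/security/cacerts \
          -storepass changeit \
          -alias ntt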

ak3306361 commented 1 week ago

By "child child are still not appearing", you are referring to the OPCUA Gateway device not appearing right?

Yes, the OPCUA Gateway device is not appearing in the UI (the child device is not appearing under the "raspi" device in the UI).

Is the "keytool -import -trustcacerts -file …" step above actually necessary? I implemented it that way because I had seen the following log during the previous implementation. However, this time that log is not being output, so I wanted to confirm with you just in case.

opcua-gateway       | 2024-06-07 09:23:35.170 WARN 1 --- [T Call: d:raspi] c.c.o.c.g.p.c.PlatformProvider      : Error caught when checking platform availability: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
reubenmiller commented 1 week ago

By "child child are still not appearing", you are referring to the OPCUA Gateway device not appearing right?

Yes, OPCUA Gateway device not appearing in UI.(child device not appearing in raspi UI)

「keytool -import -trustcacerts -file~~~」 ↑↑ Is this a necessary implementation? Since I confirmed the following log during the previous implementation, I implemented it this way. However, this time the log is not being output, so I wanted to confirm with you just in case.

opcua-gateway       | 2024-06-07 09:23:35.170 WARN 1 --- [T Call: d:raspi] c.c.o.c.g.p.c.PlatformProvider      : Error caught when checking platform availability: [javax.net](http://javax.net/).ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

So did you restart the opcua-device-gateway container after changing the java truststore?

And if so, then you need to provide some log output of the opcua-device-gateway agent. However, this issue is more suited to being created on opcua-device-gateway-container, as it is unrelated to the tedge-container-plugin (and the original problem was also due to a configuration issue rather than an issue with the tedge-container-plugin).

ak3306361 commented 1 week ago

So did you restart the opcua-device-gateway container after changing the java truststore?

No, I didn't restart the opcua-device-gateway container. When I ran the "keytool -import -trustcacerts -file …" step above, the certificate error disappeared, and the child devices appeared in the raspi UI.

reubenmiller commented 1 week ago

So the OPCUA device is now appearing?

ak3306361 commented 1 week ago

I apologize for the confusion.

So the OPCUA device is now appearing?

No, the OPCUA device (child device) is still not appearing in the UI.

(two screenshots attached)

The following quoted comment is about the previous implementation (this is not the implementation method you showed me through the link): https://thin-edge.github.io/thin-edge.io/operate/security/cloud-authentication/#debianubunturaspberrypi-os

No, I didn't restart the opcua-device-gateway container. When I ran the "keytool -import -trustcacerts -file …" step above, the certificate error disappeared, and the child devices appeared in the raspi UI.

Here is my current status.

aoi@raspberrypi:~/takebishi-devicegateway/opcua $ cat docker-compose.yml 
name: opcua
services:

  opcua-gateway:
    container_name: opcua-gateway
    # Image is maintained under: https://github.com/thin-edge/opcua-device-gateway-container
    image: ghcr.io/thin-edge/opcua-device-gateway
    restart: always
    network_mode: "host"
    extra_hosts:
      - "takebishi-device-gateway:127.0.0.1"
      - "edgethingscloud.ug163.ft.nttcloud.net:10.8.163.90"
    environment:
      # OPCUA Gateway info
      - OPCUA_GATEWAY_IDENTIFIER=${OPCUA_GATEWAY_IDENTIFIER:-tedgeOPCUAGateway}
      - OPCUA_GATEWAY_NAME=${OPCUA_GATEWAY_NAME:-thin-edge OPCUA gateway}

      # thin-edge.io MQTT broker
      - MQTT_BROKER=localhost:1883
    volumes:
    - type: bind
      source: /etc/opcua
      target: /data
    # Provide access to thin-edge.io configuration
    - type: bind
      source: /etc/tedge
      target: /etc/tedge

  takebishi-device-gateway:
    container_name: takebishi-device-gateway
    image: public.ecr.aws/takebishi/tkbs-dgwd20
    restart: always
    network_mode: "host"
    environment:
      - webport=80
      - websport=443
#    ports:
#      - "8080:80"
#      - "443:443"
#      - "52220:52220"
#      - "21:21"
#      - "30000-30009:30000-30009"
#      - "57510:57510"
#    devices:
#      - "/dev/ttyACM0"
#      - "/dev/hidraw0"
    volumes:
      - type: bind
        source: /etc/takebishi/sd_card
        target: /mnt/sdcard
      - type: bind
        source: /etc/takebishi/data
        target: /etc/dxpgateway
aoi@raspberrypi:~/takebishi-devicegateway/opcua $ docker compose up
[+] Running 2/2
 ✔ Container takebishi-device-gateway  Created                                                                                                                                                  0.3s 
 ✔ Container opcua-gateway             Created                                                                                                                                                  0.3s 
Attaching to opcua-gateway, takebishi-device-gateway
takebishi-device-gateway  | Starting DeviceGateway
takebishi-device-gateway  | startwait:1 procchk:2 webport:80 websport:443
takebishi-device-gateway  | sport1:notset 2:notset 3:notset 4:notset 5:notset 6:notset 7:notset 8:notset
takebishi-device-gateway  | 9:notset 10:notset 11:notset 12:notset 13:notset 14:notset 15:notset 16:notset
opcua-gateway             | Using value from tedge: DEVICE_ID=raspi
opcua-gateway             | Using value from tedge: C8Y_BASEURL=https://edgethingscloud.ug163.ft.nttcloud.net
opcua-gateway             | Prefixing OPCUA_GATEWAY_IDENTIFIER with the device_id to avoid identity clashes
opcua-gateway             |   OPCUA_GATEWAY_IDENTIFIER: raspi:kamide
opcua-gateway             |   OPCUA_GATEWAY_NAME: thin-edge OPCUA gateway
opcua-gateway             | Settings
opcua-gateway             | Starting the opcua-device-gateway...
takebishi-device-gateway  | Start main process.
takebishi-device-gateway  | Start setting process.
takebishi-device-gateway  | Start lighttpd.
takebishi-device-gateway  | UA Server: Initializing Stack...
opcua-gateway             | 
opcua-gateway             |   .   ____          _            __ _ _
opcua-gateway             |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
opcua-gateway             | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
opcua-gateway             |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
opcua-gateway             |   '  |____| .__|_| |_|_| |_\__, | / / / /
opcua-gateway             |  =========|_|==============|___/=/_/_/_/
opcua-gateway             |  :: Spring Boot ::               (v2.7.17)
opcua-gateway             | 
~~~~~~
~~~~~~
aoi@raspberrypi:~ $ sudo docker cp /usr/local/share/ca-certificates/NTTcommunicationsCumulocityIoTEdgeInternalRootCA.crt opcua-gateway:/app/
Successfully copied 3.07kB to opcua-gateway:/app/
aoi@raspberrypi:~ $ sudo docker exec -it opcua-gateway /bin/bash
root@raspberrypi:/app# ls
NTTcommunicationsCumulocityIoTEdgeInternalRootCA.crt  application-tenant.yaml  data  entrypoint.sh  logging.xml  opcua-device-gateway.jar
root@raspberrypi:/app# keytool -import -trustcacerts -file /app/NTTcommunicationsCumulocityIoTEdgeInternalRootCA.crt -keystore /usr/lib/jvm/java-11-openjdk-arm64/lib/security/cacerts -alias ntt
Warning: use -cacerts option to access cacerts keystore
Enter keystore password:  
Owner: CN=NTTcommunications Cumulocity IoT Edge Internal Root CA, O=NTTcommunications
Issuer: CN=NTTcommunications Cumulocity IoT Edge Internal Root CA, O=NTTcommunications
Serial number: 18fccffca55
Valid from: Fri May 31 04:54:26 UTC 2024 until: Sun May 07 04:54:26 UTC 2124
Certificate fingerprints:
     SHA1: 27:4D:8B:C6:44:78:72:4B:51:AB:5C:B3:DD:24:E2:12:37:14:54:78
     SHA256: B3:CB:67:C0:7D:8C:AE:99:FC:69:59:AC:BD:9E:DB:0F:71:AB:7A:94:61:89:45:C5:FC:72:A1:4C:8E:16:6F:55
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3
Extensions: 
#1: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
  CA:true
  PathLen:2147483647
]
#2: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
  clientAuth
  serverAuth
]
#3: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
  DigitalSignature
  Key_CertSign
]
#4: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 2F 4B 22 9E F6 78 69 DB   56 30 64 CE 78 CF A2 81  /K"..xi.V0d.x...
0010: CE 8A B9 95                                        ....
]
]
Trust this certificate? [no]:  yes
Certificate was added to keystore

However manually editing containers is never recommended

I have the same understanding. My current thinking is that embedding the certificate into the container image beforehand, similar to the /etc/hosts entry, should allow the Raspi's child devices to be recognized. Is this correct? I'm currently verifying that method.

I apologize if I misunderstood something🙇

reubenmiller commented 1 week ago

It's a bit hard for me to verify if the java truststore is correctly configured (as this is not part of thin-edge.io), but I guess it looks ok?

Maybe try switching to the following image (as there was a change made last week); I think there might be an issue with the opcua-device-gateway image (which uses the opcua-device-gateway-1020 image by default).

ghcr.io/thin-edge/opcua-device-gateway-1018:20240701.1034
ak3306361 commented 1 week ago

Using the image you provided worked perfectly. Thank you very much!!

I have one question about the tedge-container-plugin functionality. For example, is it possible to deploy containers to a Raspberry Pi using this plugin?

I think this functionality is not available based on the information from the following link, but I wanted to confirm just in case. https://github.com/thin-edge/tedge-container-plugin?tab=readme-ov-file#ui-plugin

reubenmiller commented 1 week ago

Ok good to hear. I’ll have to report back to the team that is responsible for the opcua-device-gateway to see why the 1020 version isn’t working as expected (maybe some configuration has changed).

As for installing and uninstalling containers and container groups (aka docker compose), yes you can; that is explained in the second point of this repo's readme: https://github.com/thin-edge/tedge-container-plugin?tab=readme-ov-file#what-will-be-deployed-to-the-device

reubenmiller commented 1 week ago

A software management plugin (sm-plugin for short) is a thin-edge.io extension point that allows users to install, remove, and list software of a given type via the Software tab in the device management UI (e.g. the default UI provided out of the box by Cumulocity IoT).
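
For reference, a rough sketch of exercising this interface manually. The list call is the same one shown earlier in this thread; the install/remove calls follow the sm-plugin command convention, and the module name and version here are purely illustrative:

    # Each executable under /etc/tedge/sm-plugins/ handles one software type.
    # List software of type "container" currently on the device
    sudo -u tedge /etc/tedge/sm-plugins/container list

    # Sketch of the install/remove calls the agent makes during a software operation
    # (module name and version are illustrative)
    sudo -u tedge /etc/tedge/sm-plugins/container install nginx --module-version latest
    sudo -u tedge /etc/tedge/sm-plugins/container remove nginx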

ak3306361 commented 1 week ago

If I want to deploy an nginx container (for example) to a Raspberry Pi using the Cumulocity IoT Edge UI, how should I proceed? I want to implement one-call container deployment (any container image is OK).

I understand the concept below, but I'm not sure about the implementation details. I apologize for my lack of knowledge.

https://github.com/thin-edge/tedge-container-plugin?tab=readme-ov-file#installremove-single-containers

reubenmiller commented 1 week ago

It might be worthwhile checking out the official Cumulocity IoT documentation on how to manage software via the UI. It goes through the general process of using software, and the details of what you need to enter in the different fields are already documented here.

But a few general tips:

Firstly, we recommend using the "container-group" type, which is just a docker-compose.yaml file. The "container-group" software type allows more control over aspects of the running container(s) (e.g. publishing ports, defining which networks to use, etc.). So for nginx, you could create a docker-compose.yaml file which has everything you need to run the nginx container.
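
For example, a minimal docker-compose.yaml for nginx could look like the sketch below (the image tag and the published host port are illustrative assumptions):

    # docker-compose.yaml (sketch)
    services:
      nginx:
        image: nginx:latest
        restart: always
        ports:
          # publish container port 80 on host port 8080 (example values)
          - "8080:80"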

Then, to deploy the "container-group" to the device you need to:

  1. Create a software repository item in the Device Management Application (the table below shows the information that you should use)

    Property        Value
    name            nginx (but it can be anything)
    version         1.0.0 (it can be anything, but some kind of meaningful versioning is useful)
    description     (add some meaningful description)
    software type   container-group (this indicates that the software item is a docker compose file)

    Then upload the docker-compose.yaml to the same dialog box (where you're entering the software item info)

  2. Go to your device in the Device Management Application

  3. Open the "Software" tab, and following the official docs instructions

ak3306361 commented 1 week ago

Thank you. I was able to deploy nginx. I really appreciate your kind help🙇

reubenmiller commented 1 week ago

Ok, so I'll close the ticket now.

But glad that everything is working now :)