kubeedge / kubeedge

Kubernetes Native Edge Computing Framework (project under CNCF)
https://kubeedge.io
Apache License 2.0

Error connecting to a camera with the v1.17 Onvif-mapper #5688

Open Walterbishop233 opened 3 months ago

Walterbishop233 commented 3 months ago

Problem description: Following the onvif-mapper documentation, I am collecting camera data from an edge node. I run the onvif-mapper as a container image, and after executing kubectl apply -f ./deployment.yaml the container stays in the ContainerCreating state. Details below:

  1. Output of kubectl get pods:

    kube@master1:~/kubeEdge/code/onvif-mapper-self/cmd$ kubectl get pods -A
    NAMESPACE        NAME                                 READY   STATUS              RESTARTS       AGE
    default          onvif-mapper-test-7b58f6d4dc-d7vlb   0/1     ContainerCreating   0              40m
    kube-flannel     kube-flannel-cloud-ds-cdrs6          1/1     Running             4 (16h ago)    39h
    kube-flannel     kube-flannel-cloud-ds-lnjlx          1/1     Running             4 (16h ago)    39h
    kube-flannel     kube-flannel-edge-ds-fblgr           1/1     Running             1 (16h ago)    17h
    kube-system      coredns-7bdc4cb885-j5w8m             1/1     Running             4 (16h ago)    39h
    kube-system      coredns-7bdc4cb885-rgs9n             1/1     Running             4 (16h ago)    39h
    kube-system      etcd-master1                         1/1     Running             38 (16h ago)   39h
    kube-system      kube-apiserver-master1               1/1     Running             48 (16h ago)   39h
    kube-system      kube-controller-manager-master1      1/1     Running             52 (16h ago)   39h
    kube-system      kube-proxy-flw89                     1/1     Running             4 (16h ago)    39h
    kube-system      kube-proxy-grztb                     1/1     Running             4 (16h ago)    39h
    kube-system      kube-scheduler-master1               1/1     Running             55 (16h ago)   39h
    kubeedge         cloudcore-dc75f4b46-8wpwn            1/1     Running             5 (71m ago)    18h
    kubeedge         edge-eclipse-mosquitto-kpvrq         1/1     Running             0              62m
    metallb-system   controller-8d9cf599f-zb6bk           0/1     CrashLoopBackOff    17 (29s ago)   39m
    metallb-system   speaker-6v9tx                        1/1     Running             0              58m
    metallb-system   speaker-mzjpg                        0/1     Evicted             0              17s
  2. cloudcore logs:

    I0625 11:39:24.682549       1 upstream.go:89] Dispatch message: 60626841-7fa4-40ff-8a22-20ea11234cb5
    I0625 11:39:24.682576       1 upstream.go:96] Message: 60626841-7fa4-40ff-8a22-20ea11234cb5, resource type is: membership/detail
    E0625 11:39:24.754658       1 upstream.go:1044] message: 4e2d493e-1176-42ae-a5e0-850971daacca process failure, patch pod failed with error: pods "edge-eclipse-mosquitto-9tfph" not found, namespace: kubeedge, name: edge-eclipse-mosquitto-9tfph
    E0625 11:39:34.755786       1 upstream.go:1044] message: 04bded4c-1b91-43aa-926d-e083dd3b2bc6 process failure, patch pod failed with error: pods "edge-eclipse-mosquitto-9tfph" not found, namespace: kubeedge, name: edge-eclipse-mosquitto-9tfph
    E0625 11:39:44.755394       1 upstream.go:1044] message: 04ca0a30-fdd0-465f-8876-b1d98a11037a process failure, patch pod failed with error: pods "edge-eclipse-mosquitto-9tfph" not found, namespace: kubeedge, name: edge-eclipse-mosquitto-9tfph
    E0625 11:39:54.754166       1 upstream.go:1044] message: 0da9aece-b115-4837-a6c6-72f10ac5080c process failure, patch pod failed with error: pods "edge-eclipse-mosquitto-9tfph" not found, namespace: kubeedge, name: edge-eclipse-mosquitto-9tfph
    E0625 11:40:04.752691       1 upstream.go:1044] message: 8a970bd2-046e-4064-93b4-61715908c587 process failure, patch pod failed with error: pods "edge-eclipse-mosquitto-9tfph" not found, namespace: kubeedge, name: edge-eclipse-mosquitto-9tfph
    E0625 11:40:14.755195       1 upstream.go:1044] message: 948fdeeb-082c-490e-ad4b-d18325f106ec process failure, patch pod failed with error: pods "edge-eclipse-mosquitto-9tfph" not found, namespace: kubeedge, name: edge-eclipse-mosquitto-9tfph
  3. edgecore logs:

    kube@edge1:~$ journalctl -u edgecore.service -xe > ./edgecore_log.txt && tail -10 ./edgecore_log.txt 
    6月 25 11:41:06 edge1 edgecore[26437]: I0625 11:41:06.738226   26437 scope.go:117] "RemoveContainer" containerID="74ce929bf767906facb4182e9d488a60a1d204c22c7046a68493ce0bb9d8674f"
    6月 25 11:41:06 edge1 edgecore[26437]: E0625 11:41:06.738468   26437 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=controller pod=controller-8d9cf599f-zb6bk_metallb-system(5894e0b4-ff9a-458f-8977-5e8592a55ecb)\"" pod="metallb-system/controller-8d9cf599f-zb6bk" podUID="5894e0b4-ff9a-458f-8977-5e8592a55ecb"
    6月 25 11:41:14 edge1 edgecore[26437]: I0625 11:41:14.748232   26437 status_manager.go:877] "Failed to update status for pod" pod="kubeedge/edge-eclipse-mosquitto-9tfph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a227c19-7e9a-46f7-9a9e-4cceacbc969d\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"image\\\":\\\"eclipse-mosquitto:1.6.15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.  The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"edge-eclipse-mosquitto\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was terminated\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}}}]}}\" for pod \"kubeedge\"/\"edge-eclipse-mosquitto-9tfph\": pods \"edge-eclipse-mosquitto-9tfph\" not found"
    6月 25 11:41:15 edge1 edgecore[26437]: E0625 11:41:15.934134   26437 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14446fc4-f073-46d2-abc3-cd6b67e18d04-config podName:14446fc4-f073-46d2-abc3-cd6b67e18d04 nodeName:}" failed. No retries permitted until 2024-06-25 11:43:17.934109377 +0800 CST m=+2213.595599247 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/14446fc4-f073-46d2-abc3-cd6b67e18d04-config") pod "onvif-mapper-test-7b58f6d4dc-d7vlb" (UID: "14446fc4-f073-46d2-abc3-cd6b67e18d04") : configmap references non-existent config key: configData
    6月 25 11:41:18 edge1 edgecore[26437]: I0625 11:41:18.735477   26437 scope.go:117] "RemoveContainer" containerID="74ce929bf767906facb4182e9d488a60a1d204c22c7046a68493ce0bb9d8674f"
    6月 25 11:41:18 edge1 edgecore[26437]: E0625 11:41:18.735814   26437 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=controller pod=controller-8d9cf599f-zb6bk_metallb-system(5894e0b4-ff9a-458f-8977-5e8592a55ecb)\"" pod="metallb-system/controller-8d9cf599f-zb6bk" podUID="5894e0b4-ff9a-458f-8977-5e8592a55ecb"
    6月 25 11:41:24 edge1 edgecore[26437]: I0625 11:41:24.747546   26437 status_manager.go:877] "Failed to update status for pod" pod="kubeedge/edge-eclipse-mosquitto-9tfph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a227c19-7e9a-46f7-9a9e-4cceacbc969d\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"image\\\":\\\"eclipse-mosquitto:1.6.15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.  The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"edge-eclipse-mosquitto\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was terminated\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}}}]}}\" for pod \"kubeedge\"/\"edge-eclipse-mosquitto-9tfph\": pods \"edge-eclipse-mosquitto-9tfph\" not found"
    6月 25 11:41:29 edge1 edgecore[26437]: I0625 11:41:29.746309   26437 scope.go:117] "RemoveContainer" containerID="74ce929bf767906facb4182e9d488a60a1d204c22c7046a68493ce0bb9d8674f"
    6月 25 11:41:29 edge1 edgecore[26437]: E0625 11:41:29.746543   26437 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=controller pod=controller-8d9cf599f-zb6bk_metallb-system(5894e0b4-ff9a-458f-8977-5e8592a55ecb)\"" pod="metallb-system/controller-8d9cf599f-zb6bk" podUID="5894e0b4-ff9a-458f-8977-5e8592a55ecb"
    6月 25 11:41:34 edge1 edgecore[26437]: I0625 11:41:34.753796   26437 status_manager.go:877] "Failed to update status for pod" pod="kubeedge/edge-eclipse-mosquitto-9tfph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a227c19-7e9a-46f7-9a9e-4cceacbc969d\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"image\\\":\\\"eclipse-mosquitto:1.6.15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.  The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"edge-eclipse-mosquitto\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was terminated\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}}}]}}\" for pod \"kubeedge\"/\"edge-eclipse-mosquitto-9tfph\": pods \"edge-eclipse-mosquitto-9tfph\" not found"

Process: below are the steps I performed:

  4. Build the container image

    
    # 1. Modify main.go: "os" is not actually used, so change it to a blank import
    vim ./cmd/main.go
    import (
            "errors"
            _ "os"

2. Change the image registry

vim ./Dockerfile_stream

FROM docker.m.daocloud.io/golang:1.20.10-bullseye AS builder
FROM docker.m.daocloud.io/ubuntu:18.04

3. Build the image

docker build -f Dockerfile_stream -t onvif-mapper-image .


6. Image built successfully:

kube@master1:~/kubeEdge/code/onvif-mapper-self$ docker images
REPOSITORY           TAG       IMAGE ID       CREATED          SIZE
onvif-mapper-image   latest    5bdcb3b91e52   47 minutes ago   1.99GB


7. Apply configuration file 1

Modify the nodeName and image fields in deployment.yaml; image is set to the image name built above, onvif-mapper-image:latest.

kubectl apply -f ./deployment.yaml


8. Apply configuration file 2

kube@master1:~/kubeEdge/code/onvif-mapper-self/resource$ ls
configmap.yaml  deployment.yaml  onvifdevice-instance.yaml  onvifdevice-model.yaml  secret.yaml

1. Set the data.password field in secret.yaml to the camera's password (a sketch follows this list)

2. Set the url, username, and edge node name in onvifdevice-instance.yaml

3. Apply the yaml files:

kubectl apply -f ./onvifdevice-model.yaml
kubectl apply -f ./onvifdevice-instance.yaml
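For reference, a minimal sketch of what such a secret.yaml could look like (the secret name mysecret and the password key match the deployment.yaml and the /etc/secret/password path used by the mapper; the value here is only a placeholder and must be base64 encoded):

apiVersion: v1
kind: Secret
metadata:
  name: mysecret        # must match secretName in deployment.yaml
  namespace: default
type: Opaque
data:
  password: YWRtaW4xMjM=   # placeholder: echo -n 'admin123' | base64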

**Related files**:
Below are the configuration files related to the Onvif-mapper:
1. configmap.yaml

kube@master1:~/kubeEdge/code/onvif-mapper-self/resource$ cat ./configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-mapper
data:
  configData: |
    grpc_server:
      socket_path: /etc/kubeedge/onvif.sock
    common:
      name: Onvif-mapper
      version: v1.13.0
      api_version: v1.0.0
      protocol: onvif # TODO add your protocol name
      address: 127.0.0.1
      edgecore_sock: /etc/kubeedge/dmi.sock

2. deployment.yaml

kube@master1:~/kubeEdge/code/onvif-mapper-self/resource$ cat ./deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: onvif-mapper-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      nodeName: edge1 # replace with your edge node name
      containers:

The import block of the modified cmd/main.go:

import (
    "errors"
    _ "os"

    "k8s.io/klog/v2"

    "github.com/kubeedge/onvif/device"
    "github.com/kubeedge/mapper-framework/pkg/common"
    "github.com/kubeedge/mapper-framework/pkg/config"
    "github.com/kubeedge/mapper-framework/pkg/grpcclient"
    "github.com/kubeedge/mapper-framework/pkg/grpcserver"
    "github.com/kubeedge/mapper-framework/pkg/httpserver"
)


From these errors I noticed that the containers referenced in the edgecore logs do not match the containers actually running, and that the metallb-system containers fail to run. However, all of these containers were running normally when I deployed the Onvif-mapper. In addition, could you tell me whether the steps I followed above to run the onvif-mapper are correct?
wbc6080 commented 3 months ago

What is the log of onvif mapper pod?

Walterbishop233 commented 3 months ago

> What is the log of onvif mapper pod?

I redeployed the onvif-mapper; the pod status is now ImagePullBackOff. The edgecore log errors are as follows:

 6月 26 20:50:18 edge1 edgecore[3438]: E0626 20:50:18.843357    3438 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/onvif-mapper-image:latest\": failed to resolve reference \"docker.io/library/onvif-mapper-image:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="onvif-mapper-image:latest"
 6月 26 20:50:18 edge1 edgecore[3438]: E0626 20:50:18.843431    3438 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/onvif-mapper-image:latest\": failed to resolve reference \"docker.io/library/onvif-mapper-image:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="onvif-mapper-image:latest"
 6月 26 20:50:18 edge1 edgecore[3438]: E0626 20:50:18.843836    3438 kuberuntime_manager.go:1254] container &Container{Name:demo,Image:onvif-mapper-image:latest,Command:[/bin/sh -c],Args:[/kubeedge/main --config-file /tmp/config/config.yaml --v 4],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{300 -3} {<nil>} 300m DecimalSI},memory: {{524288000 0} {<nil>} 500Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume,ReadOnly:false,MountPath:/etc/kubeedge,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:config,ReadOnly:false,MountPath:/tmp/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:secret,ReadOnly:true,MountPath:/etc/secret,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h4tqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mapper-test-7b58f6d4dc-tb498_default(8d60cb1b-fe8d-4b34-8800-33e9795c87f7): ErrImagePull: failed to pull and unpack image "docker.io/library/onvif-mapper-image:latest": failed to resolve reference "docker.io/library/onvif-mapper-image:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
 6月 26 20:50:18 edge1 edgecore[3438]: E0626 20:50:18.843905    3438 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"demo\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/onvif-mapper-image:latest\\\": failed to resolve reference \\\"docker.io/library/onvif-mapper-image:latest\\\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed\"" pod="default/mapper-test-7b58f6d4dc-tb498" podUID="8d60cb1b-fe8d-4b34-8800-33e9795c87f7"
 6月 26 20:50:30 edge1 edgecore[3438]: E0626 20:50:30.381978    3438 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"demo\" with ImagePullBackOff: \"Back-off pulling image \\\"onvif-mapper-image:latest\\\"\"" pod="default/mapper-test-7b58f6d4dc-tb498" podUID="8d60cb1b-fe8d-4b34-8800-33e9795c87f7"

I built this image from the onvif-mapper Dockerfile, then copied it to the edge1 node and loaded it there. The relevant steps are as follows:

# on the master1 node
docker build -f Dockerfile_stream -t onvif-mapper-image .
# after a successful build, copy the image to the edge1 node and load it (see the sketch after the image listing below)
# on the edge1 node
kube@edge1:~$ docker images
REPOSITORY                      TAG       IMAGE ID       CREATED        SIZE
onvif-mapper-image              latest    f58dbded2eb4   2 hours ago    1.99GB
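
A sketch of the copy-and-load step referred to above (assuming docker save/scp/docker load; note that if the edge node's container runtime is containerd rather than Docker, the archive would additionally need to be imported with something like ctr -n k8s.io images import so that edgecore can find the image locally):

# on the master1 node: export the built image to a tar archive
docker save -o onvif-mapper-image.tar onvif-mapper-image:latest
scp onvif-mapper-image.tar kube@edge1:~/

# on the edge1 node: load it into the local Docker image store
docker load -i onvif-mapper-image.tar
# if the node runtime is containerd, import into its k8s.io namespace instead:
# sudo ctr -n k8s.io images import onvif-mapper-image.tar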

I have also changed the image field in /resource/deployment.yaml to the image I built. The complete deployment.yaml is as follows:

kube@master1:~/kubeEdge/code/onvif-mapper/resource$ cat ./deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mapper-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      nodeName: edge1 # replace with your edge node name
      containers:
        - name: demo
          volumeMounts: # Required, mapper need to communicate with grpcclient and get the config
            - name: test-volume
              mountPath: /etc/kubeedge
            - name: config
              mountPath: /tmp/config
            - name: secret
              mountPath: /etc/secret
              readOnly: true
          image: onvif-mapper-image:latest # Replace with your mapper image name
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 300m
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 100Mi
          command: [ "/bin/sh","-c" ]
          args: [ "/kubeedge/main --config-file /tmp/config/config.yaml --v 4" ]
      volumes:
        - name: test-volume
          hostPath:
            path: /etc/kubeedge
            type: Directory
        - name: config
          configMap:
            name: cm-mapper
            items:
              - key: configData
                path: config.yaml
        - name: secret
          secret:
            secretName: mysecret

Question: I have already changed the image name in deployment.yaml to the image I built myself, so why does the ImagePullBackOff error still occur? The original value of the image field was docker.io/library/onvif-mapper:v1.0.0.

Walterbishop233 commented 3 months ago

> What is the log of onvif mapper pod?

I have resolved the ImagePullBackOff error, but now the onvif-mapper goes into CrashLoopBackOff. The logs are as follows:

kube@master1:~/kubeEdge/code/onvif-mapper/resource$ kubectl get pods -A
NAMESPACE        NAME                              READY   STATUS             RESTARTS        AGE
default          mapper-test-7b58f6d4dc-wdwqm      0/1     CrashLoopBackOff   3 (22s ago)     2m7s
kube-flannel     kube-flannel-cloud-ds-lnjlx       1/1     Running            3 (103m ago)    3d1h
kube-flannel     kube-flannel-edge-ds-gr8jn        1/1     Running            4 (118m ago)    24h
kube-system      coredns-7bdc4cb885-j5w8m          1/1     Running            3 (103m ago)    3d1h
kube-system      coredns-7bdc4cb885-rgs9n          1/1     Running            3 (103m ago)    3d1h
kube-system      etcd-master1                      1/1     Running            37 (103m ago)   3d1h
kube-system      kube-apiserver-master1            1/1     Running            47 (103m ago)   3d1h
kube-system      kube-controller-manager-master1   1/1     Running            50 (103m ago)   3d1h
kube-system      kube-proxy-t9xk5                  1/1     Running            3 (103m ago)    25h
kube-system      kube-scheduler-master1            1/1     Running            54 (103m ago)   3d1h
kubeedge         cloudcore-dc75f4b46-xn7sm         1/1     Running            0               93m
kubeedge         edge-eclipse-mosquitto-nxwz4      1/1     Running            4 (118m ago)    23h
metallb-system   controller-8d9cf599f-nxtq6        1/1     Running            1 (103m ago)    148m
metallb-system   speaker-bwxhk                     1/1     Running            0               94m
kube@master1:~/kubeEdge/code/onvif-mapper/resource$ kubectl logs mapper-test-7b58f6d4dc-wdwqm 
/kubeedge/main: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /kubeedge/main)
/kubeedge/main: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /kubeedge/main)
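
This error pattern usually means the binary was linked against a newer glibc than the runtime image ships (golang:1.20.10-bullseye is Debian 11 with glibc 2.31, while ubuntu:18.04 has glibc 2.27). A quick way to compare the two, assuming both images can be pulled locally:

docker run --rm docker.m.daocloud.io/golang:1.20.10-bullseye ldd --version | head -n 1
docker run --rm docker.m.daocloud.io/ubuntu:18.04 ldd --version | head -n 1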
wbc6080 commented 3 months ago

In addition to using docker image deployment, we can also deploy mapper by compiling and running locally. Could you help test whether the local compilation and running are normal?
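
A minimal sketch of what local compilation and running could look like, assuming the onvif mapper source directory on the edge node and the same flags used in the deployment args above (config.yaml mirroring the configData of the ConfigMap):

# on the edge1 node, from the onvif mapper source directory
go build -o main cmd/main.go
./main --config-file ./config.yaml --v 4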

Walterbishop233 commented 3 months ago

> In addition to using docker image deployment, we can also deploy mapper by compiling and running locally. Could you help test whether the local compilation and running are normal?

I modified the Dockerfile and successfully deployed via a docker image, but it still reports an error; the onvif-mapper error is Init device default/onvif-device-01 error: Failed to load certificate

In addition, I enabled the onvif protocol on the camera by following a blog post.

The Dockerfile is as follows:

FROM docker.m.daocloud.io/golang:1.20.10-bullseye AS builder

WORKDIR /build

ENV GO111MODULE=on \
    GOPROXY=https://goproxy.cn,direct

COPY . .

ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
    apt-get install -y bzip2 curl upx-ucl gcc-aarch64-linux-gnu libc6-dev-arm64-cross gcc-arm-linux-gnueabi libc6-dev-armel-cross libva-dev libva-drm2 libx11-dev libvdpau-dev libxext-dev libsdl1.2-dev libxcb1-dev libxau-dev libxdmcp-dev yasm gcc make

RUN curl -sLO https://ffmpeg.org/releases/ffmpeg-4.1.6.tar.bz2 && \
    tar -jx --strip-components=1 -f ffmpeg-4.1.6.tar.bz2 &&  \
    ./configure &&  make && \
    make install

RUN GOOS=linux go build -o main cmd/main.go

FROM docker.m.daocloud.io/ubuntu:20.04
RUN mkdir -p kubeedge

ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
    apt-get install -y bzip2 curl upx-ucl gcc-aarch64-linux-gnu libc6-dev-arm64-cross gcc-arm-linux-gnueabi libc6-dev-armel-cross libva-dev libva-drm2 libx11-dev libvdpau-dev libxext-dev libsdl1.2-dev libxcb1-dev libxau-dev libxdmcp-dev yasm gcc make

RUN curl -sLO https://ffmpeg.org/releases/ffmpeg-4.1.6.tar.bz2 && \
    tar -jx --strip-components=1 -f ffmpeg-4.1.6.tar.bz2 &&  \
    ./configure &&  make && \
    make install

COPY --from=builder /build/main kubeedge/
COPY ./config.yaml kubeedge/

WORKDIR kubeedge

Below are the onvif-mapper pod logs:

kube@master1:~/kubeEdge/code/onvif-mapper$ kubectl logs mapper-test-7b58f6d4dc-4wjvk 
I0627 13:18:39.364467       7 main.go:27] config: &{GrpcServer:{SocketPath:/etc/kubeedge/onvif.sock} Common:{Name:Onvif-mapper Version:v1.13.0 APIVersion:v1.0.0 Protocol:onvif Address:127.0.0.1 EdgeCoreSock:/etc/kubeedge/dmi.sock HTTPPort:}}
I0627 13:18:39.365009       7 main.go:29] Mapper will register to edgecore
I0627 13:18:39.439551       7 main.go:34] Mapper register finished
I0627 13:18:39.440876       7 grpc.go:83] In buildPropertiesFromGrpc, PropertyVisitors = [name:"getURI"  desired:{}  visitors:{protocolName:"onvif"  configData:{data:{key:"dataType"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"string"}}}  data:{key:"password"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"/etc/secret/password"}}}  data:{key:"url"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"192.168.254.4:80"}}}  data:{key:"userName"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"admin"}}}}}  reportCycle:10000000000  collectCycle:10000000000  reportToCloud:true name:"saveFrame"  desired:{}  visitors:{protocolName:"onvif"  configData:{data:{key:"dataType"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"stream"}}}  data:{key:"format"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"jpg"}}}  data:{key:"frameCount"  value:{[type.googleapis.com/google.protobuf.FloatValue]:{value:30}}}  data:{key:"frameInterval"  value:{[type.googleapis.com/google.protobuf.FloatValue]:{value:1e+06}}}  data:{key:"outputDir"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"/tmp/case/"}}}}} name:"saveVideo"  desired:{}  visitors:{protocolName:"onvif"  configData:{data:{key:"dataType"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"stream"}}}  data:{key:"format"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"mp4"}}}  data:{key:"frameCount"  value:{[type.googleapis.com/google.protobuf.FloatValue]:{value:1000}}}  data:{key:"outputDir"  value:{[type.googleapis.com/google.protobuf.StringValue]:{value:"/tmp/case/"}}}  data:{key:"videoNum"  value:{[type.googleapis.com/google.protobuf.FloatValue]:{value:2}}}}}]
I0627 13:18:39.441091       7 grpc.go:271] final instance data from grpc = &{default/onvif-device-01 onvif-device-01 default onvif-onvif-device-01 { []} onvif-model [{getURI 0xc0000e81a0 { { }} { { }}} {saveFrame 0xc0000e8340 { { }} { { }}} {saveVideo 0xc0000e84e0 { { }} { { }}}] [{getURI getURI onvif-model onvif [123 34 99 111 110 102 105 103 68 97 116 97 34 58 123 34 100 97 116 97 84 121 112 101 34 58 34 115 116 114 105 110 103 34 44 34 112 97 115 115 119 111 114 100 34 58 34 47 101 116 99 47 115 101 99 114 101 116 47 112 97 115 115 119 111 114 100 34 44 34 117 114 108 34 58 34 49 57 50 46 49 54 56 46 50 53 52 46 52 58 56 48 34 44 34 117 115 101 114 78 97 109 101 34 58 34 97 100 109 105 110 34 125 44 34 112 114 111 116 111 99 111 108 78 97 109 101 34 58 34 111 110 118 105 102 34 125] true 10000000000 10000000000 { [] { {[] [] [] [] []}}} {getURI STRING get camera uri ReadOnly   }} {saveFrame saveFrame onvif-model onvif [123 34 99 111 110 102 105 103 68 97 116 97 34 58 123 34 100 97 116 97 84 121 112 101 34 58 34 115 116 114 101 97 109 34 44 34 102 111 114 109 97 116 34 58 34 106 112 103 34 44 34 102 114 97 109 101 67 111 117 110 116 34 58 51 48 44 34 102 114 97 109 101 73 110 116 101 114 118 97 108 34 58 49 48 48 48 48 48 48 44 34 111 117 116 112 117 116 68 105 114 34 58 34 47 116 109 112 47 99 97 115 101 47 34 125 44 34 112 114 111 116 111 99 111 108 78 97 109 101 34 58 34 111 110 118 105 102 34 125] false 0 0 { [] { {[] [] [] [] []}}} {saveFrame STREAM get camera uri ReadOnly   }} {saveVideo saveVideo onvif-model onvif [123 34 99 111 110 102 105 103 68 97 116 97 34 58 123 34 100 97 116 97 84 121 112 101 34 58 34 115 116 114 101 97 109 34 44 34 102 111 114 109 97 116 34 58 34 109 112 52 34 44 34 102 114 97 109 101 67 111 117 110 116 34 58 49 48 48 48 44 34 111 117 116 112 117 116 68 105 114 34 58 34 47 116 109 112 47 99 97 115 101 47 34 44 34 118 105 100 101 111 78 117 109 34 58 50 125 44 34 112 114 111 116 111 99 111 108 78 97 109 101 34 58 34 111 110 118 105 102 34 125] false 0 0 { [] { {[] [] [] [] []}}} {saveVideo STREAM get camera uri ReadOnly   }}]}
I0627 13:18:39.441181       7 main.go:41] devInit finished
I0627 13:18:39.441227       7 device.go:64] Dev: default/onvif-device-01&{{default/onvif-device-01 onvif-device-01 default onvif-onvif-device-01 {onvif [123 34 99 111 110 102 105 103 68 97 116 97 34 58 123 34 112 97 115 115 119 111 114 100 34 58 34 47 101 116 99 47 115 101 99 114 101 116 47 112 97 115 115 119 111 114 100 34 44 34 117 114 108 34 58 34 49 57 50 46 49 54 56 46 50 53 52 46 52 58 56 48 34 44 34 117 115 101 114 78 97 109 101 34 58 34 97 100 109 105 110 34 125 44 34 112 114 111 116 111 99 111 108 78 97 109 101 34 58 34 111 110 118 105 102 34 125]} onvif-model [{getURI 0xc0000e81a0 { { }} { { }}} {saveFrame 0xc0000e8340 { { }} { { }}} {saveVideo 0xc0000e84e0 { { }} { { }}}] [{getURI getURI onvif-model onvif [123 34 99 111 110 102 105 103 68 97 116 97 34 58 123 34 100 97 116 97 84 121 112 101 34 58 34 115 116 114 105 110 103 34 44 34 112 97 115 115 119 111 114 100 34 58 34 47 101 116 99 47 115 101 99 114 101 116 47 112 97 115 115 119 111 114 100 34 44 34 117 114 108 34 58 34 49 57 50 46 49 54 56 46 50 53 52 46 52 58 56 48 34 44 34 117 115 101 114 78 97 109 101 34 58 34 97 100 109 105 110 34 125 44 34 112 114 111 116 111 99 111 108 78 97 109 101 34 58 34 111 110 118 105 102 34 125] true 10000000000 10000000000 { [] { {[] [] [] [] []}}} {getURI STRING get camera uri ReadOnly   }} {saveFrame saveFrame onvif-model onvif [123 34 99 111 110 102 105 103 68 97 116 97 34 58 123 34 100 97 116 97 84 121 112 101 34 58 34 115 116 114 101 97 109 34 44 34 102 111 114 109 97 116 34 58 34 106 112 103 34 44 34 102 114 97 109 101 67 111 117 110 116 34 58 51 48 44 34 102 114 97 109 101 73 110 116 101 114 118 97 108 34 58 49 48 48 48 48 48 48 44 34 111 117 116 112 117 116 68 105 114 34 58 34 47 116 109 112 47 99 97 115 101 47 34 125 44 34 112 114 111 116 111 99 111 108 78 97 109 101 34 58 34 111 110 118 105 102 34 125] false 0 0 { [] { {[] [] [] [] []}}} {saveFrame STREAM get camera uri ReadOnly   }} {saveVideo saveVideo onvif-model onvif [123 34 99 111 110 102 105 103 68 97 116 97 34 58 123 34 100 97 116 97 84 121 112 101 34 58 34 115 116 114 101 97 109 34 44 34 102 111 114 109 97 116 34 58 34 109 112 52 34 44 34 102 114 97 109 101 67 111 117 110 116 34 58 49 48 48 48 44 34 111 117 116 112 117 116 68 105 114 34 58 34 47 116 109 112 47 99 97 115 101 47 34 44 34 118 105 100 101 111 78 117 109 34 58 50 125 44 34 112 114 111 116 111 99 111 108 78 97 109 101 34 58 34 111 110 118 105 102 34 125] false 0 0 { [] { {[] [] [] [] []}}} {saveVideo STREAM get camera uri ReadOnly   }}]} <nil>}
I0627 13:18:39.441495       7 server.go:35] uds socket path: /etc/kubeedge/onvif.sock
I0627 13:18:39.441503       7 server.go:66] init uds socket: /etc/kubeedge/onvif.sock
I0627 13:18:39.442818       7 server.go:50] start grpc server
E0627 13:18:39.443109       7 device.go:102] Init device default/onvif-device-01 error: Failed to load certificate
I0627 13:18:39.443596       7 server.go:62] Insecure communication, skipping server verification

Below are the edgecore logs:

6月 27 21:18:35 edge1 edgecore[802]: I0627 21:18:35.573595     802 storage.go:212] [metaserver/reststorage] successfully apply for a watch listener (/api/v1/nodes) through cloud
6月 27 21:18:35 edge1 edgecore[802]: W0627 21:18:35.573666     802 watcher.go:93] base storage now only support rev == 0, but get rev == 49739, force set to 0!
6月 27 21:18:35 edge1 edgecore[802]: I0627 21:18:35.574272     802 watcher.go:230] start watching, rev:0
6月 27 21:18:35 edge1 edgecore[802]: I0627 21:18:35.574290     802 imitator.go:176] /v1, Resource=nodes,,
6月 27 21:18:35 edge1 edgecore[802]: W0627 21:18:35.575740     802 watcher.go:188] get 2 obj in key /core/v1/nodes/null/null
6月 27 21:18:35 edge1 edgecore[802]: I0627 21:18:35.576367     802 watcher.go:200] get storage revision:49739
6月 27 21:18:38 edge1 edgecore[802]: E0627 21:18:38.222714     802 edged.go:311] resType is not pod or configmap or secret or volume: resType is configmap
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.226205     802 process.go:305] DeviceTwin receive msg
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.226498     802 process.go:70] Send msg to the MemModule module in twin
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.226622     802 membership.go:120] Membership event
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.227243     802 membership.go:169] Add devices to edge group
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.233709     802 eventbus.go:98] Success in pubMQTT with topic: $hw/events/node/edge1/membership/updated
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.295406     802 topology_manager.go:215] "Topology Admit Handler" podUID="612c3188-b83e-428a-98ed-2369452a9157" podNamespace="default" podName="mapper-test-7b58f6d4dc-4wjvk"
6月 27 21:18:38 edge1 edgecore[802]: E0627 21:18:38.297866     802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a600d0be-a1bd-4a26-8c1e-26f46c6592cb" containerName="demo"
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.299446     802 memory_manager.go:346] "RemoveStaleState removing state" podUID="a600d0be-a1bd-4a26-8c1e-26f46c6592cb" containerName="demo"
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.432307     802 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/612c3188-b83e-428a-98ed-2369452a9157-test-volume\") pod \"mapper-test-7b58f6d4dc-4wjvk\" (UID: \"612c3188-b83e-428a-98ed-2369452a9157\") " pod="default/mapper-test-7b58f6d4dc-4wjvk"
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.432361     802 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/612c3188-b83e-428a-98ed-2369452a9157-config\") pod \"mapper-test-7b58f6d4dc-4wjvk\" (UID: \"612c3188-b83e-428a-98ed-2369452a9157\") " pod="default/mapper-test-7b58f6d4dc-4wjvk"
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.432380     802 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret\" (UniqueName: \"kubernetes.io/secret/612c3188-b83e-428a-98ed-2369452a9157-secret\") pod \"mapper-test-7b58f6d4dc-4wjvk\" (UID: \"612c3188-b83e-428a-98ed-2369452a9157\") " pod="default/mapper-test-7b58f6d4dc-4wjvk"
6月 27 21:18:38 edge1 edgecore[802]: I0627 21:18:38.432399     802 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvbl\" (UniqueName: \"kubernetes.io/projected/612c3188-b83e-428a-98ed-2369452a9157-kube-api-access-4wvbl\") pod \"mapper-test-7b58f6d4dc-4wjvk\" (UID: \"612c3188-b83e-428a-98ed-2369452a9157\") " pod="default/mapper-test-7b58f6d4dc-4wjvk"
6月 27 21:18:38 edge1 edgecore[802]: E0627 21:18:38.554407     802 serviceaccount.go:112] query meta "default"/"default"/[]string(nil)/3607/v1.BoundObjectReference{Kind:"Pod", APIVersion:"v1", Name:"mapper-test-7b58f6d4dc-4wjvk", UID:"612c3188-b83e-428a-98ed-2369452a9157"} length error
6月 27 21:18:39 edge1 edgecore[802]: I0627 21:18:39.244556     802 process.go:305] DeviceTwin receive msg
6月 27 21:18:39 edge1 edgecore[802]: I0627 21:18:39.244650     802 process.go:70] Send msg to the DMIModule module in twin
6月 27 21:18:39 edge1 edgecore[802]: W0627 21:18:39.247615     802 logging.go:59] [core] [Channel #42 SubChannel #43] grpc: addrConn.createTransport failed to connect to {Addr: "/etc/kubeedge/onvif.sock", ServerName: "/etc/kubeedge/onvif.sock", }. Err: connection error: desc = "transport: Error while dialing: dial unix /etc/kubeedge/onvif.sock: connect: connection refused"
6月 27 21:18:39 edge1 edgecore[802]: E0627 21:18:39.248845     802 dmiworker.go:169] add device model onvif-model failed with err: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /etc/kubeedge/onvif.sock: connect: connection refused"
6月 27 21:18:39 edge1 edgecore[802]: E0627 21:18:39.248910     802 dmiworker.go:84] DMIModule deal MetaDeviceOperation event failed: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /etc/kubeedge/onvif.sock: connect: connection refused"
6月 27 21:18:39 edge1 edgecore[802]: I0627 21:18:39.259310     802 process.go:305] DeviceTwin receive msg
6月 27 21:18:39 edge1 edgecore[802]: I0627 21:18:39.259419     802 process.go:70] Send msg to the DMIModule module in twin
6月 27 21:18:39 edge1 edgecore[802]: W0627 21:18:39.259941     802 logging.go:59] [core] [Channel #44 SubChannel #45] grpc: addrConn.createTransport failed to connect to {Addr: "/etc/kubeedge/onvif.sock", ServerName: "/etc/kubeedge/onvif.sock", }. Err: connection error: desc = "transport: Error while dialing: dial unix /etc/kubeedge/onvif.sock: connect: connection refused"
6月 27 21:18:39 edge1 edgecore[802]: E0627 21:18:39.259987     802 dmiworker.go:132] add device onvif-device-01 failed with err: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /etc/kubeedge/onvif.sock: connect: connection refused"
6月 27 21:18:39 edge1 edgecore[802]: E0627 21:18:39.259997     802 dmiworker.go:84] DMIModule deal MetaDeviceOperation event failed: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /etc/kubeedge/onvif.sock: connect: connection refused"
6月 27 21:18:39 edge1 edgecore[802]: I0627 21:18:39.436134     802 server.go:230] success to save mapper info of Onvif-mapper to db
6月 27 21:18:51 edge1 edgecore[802]: I0627 21:18:51.438986     802 edgedlogconnection.go:141] receive stop signal, so stop logs scan ...
6月 27 21:18:57 edge1 edgecore[802]: I0627 21:18:57.581657     802 edgedlogconnection.go:141] receive stop signal, so stop logs scan ...

Could the community provide a detailed tutorial showing how to connect to a camera through the onvif-mapper and retrieve the image data captured by the camera?

wbc6080 commented 2 months ago

> Init device default/onvif-device-01 error: Failed to load certificate

The camera authentication failed. Check whether a username and password have been set on the camera, whether the username and password configured in the yaml files are correct, and whether the password has been base64 encoded.
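
A minimal sketch of checking this, assuming the secret name mysecret and the password key used in the deployment above:

# encode the camera password for secret.yaml (note -n to avoid a trailing newline)
echo -n 'your-camera-password' | base64
# verify what is currently stored in the secret
kubectl get secret mysecret -n default -o jsonpath='{.data.password}' | base64 -d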