Open-IoT-Service-Platform / platform-launcher

Apache License 2.0

recipe for target 'deploy-oisp-test' failed #365

Closed ismouhi closed 3 years ago

ismouhi commented 3 years ago

Hi, I launched the command make deploy-oisp-test and got this error:

Error: validation failed: unable to recognize "": no matches for kind "MinIOInstance" in version "miniocontroller.min.io/v1beta1"
Makefile:91: recipe for target 'deploy-oisp-test' failed
make: *** [deploy-oisp-test] Error 1


Help appreciated! Thanks!

oguzcankirmemis commented 3 years ago

Hi @ismouhi, Thanks for notifying us. Just to be sure, which branch were you using, the master?

ismouhi commented 3 years ago

Hi, v1.2-beta.1

wagmarcel commented 3 years ago

Hi @ismouhi, On what host system are you building? Ubuntu18.04?

ismouhi commented 3 years ago

> Hi @ismouhi, On what host system are you building? Ubuntu18.04?

Hi @wagmarcel Yes Ubuntu 18.04.3

wagmarcel commented 3 years ago

@ismouhi, OK - I just tried it on a vanilla 18.04 and can reproduce it. I realized that the documentation is outdated and does not reflect recent changes in the build process. I will give you the updated commands soon.

wagmarcel commented 3 years ago

Ok - there are several challenges. One is that we had to remove public access to the oisp dockerhub repos. Therefore, you need an additional step in between to import the images into containerd in order to run it locally. I tried out the following commands on a vanilla Ubuntu 18.04 for the v2.0.0-beta.1 version:

sudo apt install net-tools
git clone https://github.com/Open-IoT-Service-Platform/platform-launcher.git

cd platform-launcher/
git checkout v2.0.0-beta.1
cd util/
sudo bash ./setup-ubuntu18.04.sh
cd ..
git submodule update --init --recursive
export NODOCKERLOGIN=true

sudo -E DEBUG=true  DOCKER_TAG=v2.0.0-beta.1 make build
sudo -E DEBUG=true make DOCKER_TAG=v2.0.0-beta.1 import-images
sudo -E DEBUG=true make DOCKER_TAG=v2.0.0-beta.1 deploy-oisp-test
sudo make test

In case you have to repeat the deploy command, or you get an error message like Error from server (AlreadyExists): namespaces "oisp" already exists, then undeploy first and try the deployment again:

sudo -E DEBUG=true make DOCKER_TAG=v2.0.0-beta.1 undeploy-oisp
ismouhi commented 3 years ago

Hi @wagmarcel, I got this error message at the end of make build:

ERROR: for debugger (, "The command '/bin/sh -c apt-get install -y kubectl' returned a non-zero code: 100")
ERROR: Service 'debugger' failed to build: The command '/bin/sh -c apt-get install -y kubectl' returned a non-zero code: 100
Makefile:320: recipe for target 'build' failed
make: *** [build] Error 1

wagmarcel commented 3 years ago

@ismouhi ... and they said that building with docker makes you independent of your host system :-) But seriously, this error message shows that the repository which contains kubectl was not added successfully to the ubuntu:18.04 docker image. No idea what is causing this. I suspect that you have "legacy" in your system, i.e. older images with the same tag (e.g. the ubuntu:18.04 image). You might want to clean your system with

docker system prune -a

but be aware that this cleans up a lot, so if you have other projects running in parallel, be careful with it. Another piece of information which could help us would be the output from

sudo -E DEBUG=true DOCKER_TAG=v2.0.0-beta.1 make build CONTAINERS="debugger"

(after removing the ubuntu:18.04 image, e.g. with docker rmi ubuntu:18.04).

ismouhi commented 3 years ago

Hi @wagmarcel

After removing the ubuntu:18.04 image, this is the output from sudo -E DEBUG=true DOCKER_TAG=v2.0.0-beta.1 make build CONTAINERS="debugger":

------------------------------------------------------------------------------------------------------------------------
    Building OISP containers
------------------------------------------------------------------------------------------------------------------------
Step 1/21 : FROM ubuntu:18.04
 ---> 2c047404e52d
Step 2/21 : RUN apt-get update && apt-get -y install nano make git kafkacat python python-pip python3-pip python3-dev python3-setuptools ipython3 python-setuptools build-essential nodejs dnsutils virtualenv snapd npm wget apt-transport-https curl apache2-utils imagemagick gettext-base jq nginx
 ---> Using cache
 ---> b284fd19f559
Step 3/21 : RUN curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
 ---> Using cache
 ---> b9a4c573ffc9
Step 4/21 : RUN touch /etc/apt/sources.list.d/kubernetes.list
 ---> Using cache
 ---> fde2c3e7f50f
Step 5/21 : RUN echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list
 ---> Using cache
 ---> 3bd9bf84a851
Step 6/21 : RUN apt-get update
 ---> Using cache
 ---> 86c72c5446d7
Step 7/21 : RUN apt-get install -y kubectl
 ---> Using cache
 ---> 10c2c84bc727
Step 8/21 : RUN npm install --global n
 ---> Using cache
 ---> 82a84abd156b
Step 9/21 : RUN n 8
 ---> Using cache
 ---> 54c812da3d7b
Step 10/21 : RUN pip3 install locust oisp shyaml
 ---> Using cache
 ---> 49503cc96255
Step 11/21 : RUN npm install -g fake-smtp-server
 ---> Using cache
 ---> 701efce45c9d
Step 12/21 : ENV OISP_REMOTE https://github.com/Open-IoT-Service-Platform/platform-launcher.git
 ---> Using cache
 ---> c7af54bbb93d
Step 13/21 : RUN mkdir /home/platform-launcher
 ---> Using cache
 ---> c50bc573005f
Step 14/21 : RUN wget https://github.com/tsenart/vegeta/releases/download/cli%2Fv12.1.0/vegeta-12.1.0-linux-amd64.tar.gz &&     tar -xzvf vegeta-12.1.0-linux-amd64.tar.gz &&     cp vegeta /usr/bin/vegeta
 ---> Using cache
 ---> 39180a91e01a
Step 15/21 : RUN mkdir /home/load-tests/
 ---> Using cache
 ---> 9f56c8015217
Step 16/21 : ADD load-tests/create_test.py /home/load-tests/create_test.py
 ---> Using cache
 ---> a43332a6b601
Step 17/21 : WORKDIR /home/platform-launcher
 ---> Using cache
 ---> 6914c67b22f2
Step 18/21 : EXPOSE 8089 5557 5558 80
 ---> Using cache
 ---> a881cb98610c
Step 19/21 : CMD ["tail", "-f", "/dev/null"]
 ---> Using cache
 ---> 6bb94446ec7a
Step 20/21 : LABEL oisp=true
 ---> Using cache
 ---> 5b1a4abadfe0
Step 21/21 : LABEL oisp.git_commit=
 ---> Using cache
 ---> 64dad2c9ad3a

Successfully built 64dad2c9ad3a
Successfully tagged oisp/debugger:v2.0.0-beta.1

On another machine I cleaned the system with docker system prune -a and it passed make build and import-images, but deploy-oisp-test failed with the error shown in the attached screenshot.

wagmarcel commented 3 years ago

@ismouhi good to see that the build worked. For the deploy: I assume that you imported the images with sudo -E DEBUG=true make DOCKER_TAG=v2.0.0-beta.1 import-images. When you do kubectl -n oisp get pods, do you see any image pulling errors? If there are no image pull issues, another reason could be that it took too long to download some of the default images and helm timed out. In that case the deployment should work when you try it a second time, because some of the images will already have been downloaded. So please try undeploy and deploy again:

sudo -E DEBUG=true make DOCKER_TAG=v2.0.0-beta.1 undeploy-oisp
sudo -E DEBUG=true make DOCKER_TAG=v2.0.0-beta.1 deploy-oisp-test
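To answer the image-pull question quickly, the pod list can be filtered for pull failures. This one-liner is my own sketch, assuming the default kubectl get pods layout where column 3 is STATUS:

```shell
# Sketch: list only pods whose status indicates an image pull problem.
kubectl -n oisp get pods --no-headers \
  | awk '$3 ~ /ErrImagePull|ImagePullBackOff/ {print $1, $3}'
```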
ismouhi commented 3 years ago

Hi @wagmarcel, this is the output from kubectl -n oisp get pods (screenshot attached). This is the output from undeploy-oisp:

beamservice.oisp.org "component-splitter" deleted
beamservice.oisp.org "metrics-aggregator" deleted
beamservice.oisp.org "rule-engine" deleted
release "oisp" uninstalled
namespace "oisp" deleted
Error from server (NotFound): cassandradatacenters.cassandraoperator.instaclustr.com "oisp" not found
cassandra dc not (or already) deleted

and deploy-oisp-test still fails with the same error (screenshot attached)

wagmarcel commented 3 years ago

@ismouhi what concerns me are the "Evicted" and "Pending" states; they indicate resource problems. How much memory does your platform have? How many cores? How much storage? Can you please provide the details of one pod, e.g. the frontend pod, via:

FRONTEND_POD=$(kubectl -n oisp get pods| grep frontend| cut -f 1 -d " ")
kubectl -n oisp describe pod $FRONTEND_POD
ismouhi commented 3 years ago

These are the details of the machine:

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              1
On-line CPU(s) list: 0
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               26
Model name:          Intel(R) Xeon(R) CPU           E5540  @ 2.53GHz
Stepping:            5
CPU MHz:             2533.424
BogoMIPS:            5066.84
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            8192K
NUMA node0 CPU(s):   0

These are the frontend pod details:

Name:         frontend-7b7947665d-7fgx9
Namespace:    oisp
Priority:     0
Node:         3a4353b4ea82/172.18.0.3
Start Time:   Mon, 07 Dec 2020 13:25:11 +0100
Labels:       app=frontend
              pod-template-hash=7b7947665d
Annotations:  <none>
Status:       Pending
IP:           10.42.0.164
IPs:
  IP:           10.42.0.164
Controlled By:  ReplicaSet/frontend-7b7947665d
Containers:
  frontend:
    Container ID:  
    Image:         oisp/frontend:v2.0.0-beta.1
    Image ID:      
    Port:          4001/TCP
    Host Port:     0/TCP
    Args:
      ./wait-for-it.sh
      postgres:5432
      -t
      300000
      -s
      --
      ./wait-for-it.sh
      redis:6379
      -t
      300000
      -s
      --
      ./wait-for-it.sh
      oisp-kafka-headless:9092
      -t
      300000
      -s
      --
      ./wait-for-it.sh
      keycloak-http:4080
      -t
      300000
      -s
      --
      ./scripts/docker-start.sh
      --disable-rate-limits
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      50m
    Liveness:   http-get http://:4001/v1/api/health delay=200s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:4001/v1/api/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      NODE_ENV:                      local
      OISP_FRONTEND_CONFIG:          <set to the key 'frontend' of config map 'oisp-config'>           Optional: false
      OISP_POSTGRES_CONFIG:          <set to the key 'postgres' of config map 'oisp-config'>           Optional: false
      OISP_REDIS_CONFIG:             <set to the key 'redis' of config map 'oisp-config'>              Optional: false
      OISP_KAFKA_CONFIG:             <set to the key 'kafka' of config map 'oisp-config'>              Optional: false
      OISP_SMTP_CONFIG:              <set to the key 'smtp' of config map 'oisp-config'>               Optional: false
      OISP_FRONTENDSECURITY_CONFIG:  <set to the key 'frontend-security' of config map 'oisp-config'>  Optional: false
      OISP_KEYCLOAK_CONFIG:          <set to the key 'keycloak' of config map 'oisp-config'>           Optional: false
      OISP_GATEWAY_CONFIG:           <set to the key 'gateway' of config map 'oisp-config'>            Optional: false
      OISP_BACKENDHOST_CONFIG:       <set to the key 'backend-host' of config map 'oisp-config'>       Optional: false
      OISP_WEBSOCKETUSER_CONFIG:     <set to the key 'websocket-user' of config map 'oisp-config'>     Optional: false
      OISP_RULEENGINE_CONFIG:        <set to the key 'rule-engine' of config map 'oisp-config'>        Optional: false
      OISP_MAIL_CONFIG:              <set to the key 'mail' of config map 'oisp-config'>               Optional: false
      OISP_GRAFANA_CONFIG:           <set to the key 'grafana' of config map 'oisp-config'>            Optional: false
    Mounts:
      /app/keys from jwt-keys (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8787b (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  jwt-keys:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  oisp-secrets
    Optional:    false
  default-token-8787b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8787b
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From                   Message
  ----     ------     ----                  ----                   -------
  Normal   Scheduled  32m                   default-scheduler      Successfully assigned oisp/frontend-7b7947665d-7fgx9 to 3a4353b4ea82
  Warning  Failed     31m                   kubelet, 3a4353b4ea82  Failed to pull image "oisp/frontend:v2.0.0-beta.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/oisp/frontend:v2.0.0-beta.1": failed to resolve reference "docker.io/oisp/frontend:v2.0.0-beta.1": failed to do request: Head https://registry-1.docker.io/v2/oisp/frontend/manifests/v2.0.0-beta.1: dial tcp: lookup registry-1.docker.io: Try again
  Normal   Pulling    30m (x4 over 32m)     kubelet, 3a4353b4ea82  Pulling image "oisp/frontend:v2.0.0-beta.1"
  Warning  Failed     30m (x4 over 32m)     kubelet, 3a4353b4ea82  Error: ErrImagePull
  Warning  Failed     30m (x3 over 32m)     kubelet, 3a4353b4ea82  Failed to pull image "oisp/frontend:v2.0.0-beta.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/oisp/frontend:v2.0.0-beta.1": failed to resolve reference "docker.io/oisp/frontend:v2.0.0-beta.1": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
  Normal   BackOff    30m (x6 over 32m)     kubelet, 3a4353b4ea82  Back-off pulling image "oisp/frontend:v2.0.0-beta.1"
  Warning  Failed     2m9s (x126 over 32m)  kubelet, 3a4353b4ea82  Error: ImagePullBackOff
ismouhi commented 3 years ago

@wagmarcel do you recommend increasing the memory to 16 GB?

wagmarcel commented 3 years ago

@ismouhi thanks for sharing. The minimal configuration would be 2 cores and 16 GB memory. Otherwise you always risk that the Kubernetes scheduler evicts pods. But there is another strange thing: the frontend pod tried to pull from dockerhub. This should not happen if the images are already imported (actually this is exactly why importing is needed, since we had to withdraw public dockerhub access). Did you reset the k3s cluster by chance, e.g. by calling sudo bash ./setup-ubuntu18.04.sh again? In that case you would have to import the images into the k3s nodes again.

ismouhi commented 3 years ago

@wagmarcel OK, I will increase the configuration to 16 GB and 4 cores and run the deploy again. On this machine I ran setup-ubuntu18.04.sh only once, but import-images multiple times.

wagmarcel commented 3 years ago

@ismouhi ok - let's see what you get with the larger platform. If you see the same error message on the frontend pod (docker pull error) again, I suspect a typo in the tag name somewhere. But let's check then.
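One quick way to spot a tag typo is to list the unique set of image references the deployment actually uses and eyeball the tags. The jsonpath query below is a sketch I would try, not something from the Makefile:

```shell
# Sketch: print every container image referenced by pods in the oisp
# namespace, de-duplicated, so a mistyped tag stands out.
kubectl -n oisp get pods \
  -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}' \
  | tr ' ' '\n' | sort -u
```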

ismouhi commented 3 years ago

@wagmarcel it is still the same error:

namespace/oisp created
"codecentric" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "codecentric" chart repository
...Successfully got an update from the "incubator" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 3 charts
Downloading keycloak from repo https://codecentric.github.io/helm-charts
Downloading kafka from repo https://kubernetes-charts-incubator.storage.googleapis.com
Downloading kafka from repo https://kubernetes-charts-incubator.storage.googleapis.com
Deleting outdated charts
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
Error: failed post-install: timed out waiting for the condition
Makefile:123: recipe for target 'deploy-oisp' failed
make[1]: *** [deploy-oisp] Error 1
make[1]: Leaving directory '/home/its/platform-launcher'
Makefile:102: recipe for target 'deploy-oisp-test' failed
make: *** [deploy-oisp-test] Error 2
Name:         frontend-7b7947665d-gfn6c
Namespace:    oisp
Priority:     0
Node:         0dd46348d717/172.18.0.2
Start Time:   Mon, 07 Dec 2020 17:13:30 +0100
Labels:       app=frontend
              pod-template-hash=7b7947665d
Annotations:  <none>
Status:       Pending
IP:           10.42.0.22
IPs:
  IP:           10.42.0.22
Controlled By:  ReplicaSet/frontend-7b7947665d
Containers:
  frontend:
    Container ID:  
    Image:         oisp/frontend:v2.0.0-beta.1
    Image ID:      
    Port:          4001/TCP
    Host Port:     0/TCP
    Args:
      ./wait-for-it.sh
      postgres:5432
      -t
      300000
      -s
      --
      ./wait-for-it.sh
      redis:6379
      -t
      300000
      -s
      --
      ./wait-for-it.sh
      oisp-kafka-headless:9092
      -t
      300000
      -s
      --
      ./wait-for-it.sh
      keycloak-http:4080
      -t
      300000
      -s
      --
      ./scripts/docker-start.sh
      --disable-rate-limits
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      50m
    Liveness:   http-get http://:4001/v1/api/health delay=200s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:4001/v1/api/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      NODE_ENV:                      local
      OISP_FRONTEND_CONFIG:          <set to the key 'frontend' of config map 'oisp-config'>           Optional: false
      OISP_POSTGRES_CONFIG:          <set to the key 'postgres' of config map 'oisp-config'>           Optional: false
      OISP_REDIS_CONFIG:             <set to the key 'redis' of config map 'oisp-config'>              Optional: false
      OISP_KAFKA_CONFIG:             <set to the key 'kafka' of config map 'oisp-config'>              Optional: false
      OISP_SMTP_CONFIG:              <set to the key 'smtp' of config map 'oisp-config'>               Optional: false
      OISP_FRONTENDSECURITY_CONFIG:  <set to the key 'frontend-security' of config map 'oisp-config'>  Optional: false
      OISP_KEYCLOAK_CONFIG:          <set to the key 'keycloak' of config map 'oisp-config'>           Optional: false
      OISP_GATEWAY_CONFIG:           <set to the key 'gateway' of config map 'oisp-config'>            Optional: false
      OISP_BACKENDHOST_CONFIG:       <set to the key 'backend-host' of config map 'oisp-config'>       Optional: false
      OISP_WEBSOCKETUSER_CONFIG:     <set to the key 'websocket-user' of config map 'oisp-config'>     Optional: false
      OISP_RULEENGINE_CONFIG:        <set to the key 'rule-engine' of config map 'oisp-config'>        Optional: false
      OISP_MAIL_CONFIG:              <set to the key 'mail' of config map 'oisp-config'>               Optional: false
      OISP_GRAFANA_CONFIG:           <set to the key 'grafana' of config map 'oisp-config'>            Optional: false
    Mounts:
      /app/keys from jwt-keys (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hrmkh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  jwt-keys:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  oisp-secrets
    Optional:    false
  default-token-hrmkh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hrmkh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                     From                   Message
  ----     ------   ----                    ----                   -------
  Warning  Failed   37m (x26 over 147m)     kubelet, 0dd46348d717  Failed to pull image "oisp/frontend:v2.0.0-beta.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/oisp/frontend:v2.0.0-beta.1": failed to resolve reference "docker.io/oisp/frontend:v2.0.0-beta.1": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
  Normal   Pulling  32m (x27 over 147m)     kubelet, 0dd46348d717  Pulling image "oisp/frontend:v2.0.0-beta.1"
  Normal   BackOff  7m40s (x607 over 147m)  kubelet, 0dd46348d717  Back-off pulling image "oisp/frontend:v2.0.0-beta.1"
  Warning  Failed   2m47s (x628 over 147m)  kubelet, 0dd46348d717  Error: ImagePullBackOff
wagmarcel commented 3 years ago

@ismouhi ok - still it is trying to pull the image. For instance, on my systems it looks like this:

Normal   Pulled     4m34s                   kubelet            Container image "oisp/frontend:v2.0.0-beta.1" already present on machine

So, for a reason I don't understand, the image is not imported on your k3s agent. Let's take a look at what has been imported to the agent. When you do docker ps, you should see a container started with the command "/bin/k3s agent". Say the CONTAINER ID of this container is AGENT_ID. Then, can you please show me the output of docker exec -it AGENT_ID ctr i ls | grep frontend? And if it is empty, please provide docker exec -it AGENT_ID ctr i ls.
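For reference, the agent lookup plus a manual re-import of one image can be sketched like this, assuming the k3s agent runs as a docker container as in this setup (the frontend image tag is the one from this thread; this is a sketch, not the Makefile's import-images target):

```shell
# Sketch: find the k3s agent container, then stream a locally built image
# from the host docker daemon into the agent's containerd store.
AGENT_ID=$(docker ps --format '{{.ID}} {{.Command}}' \
  | awk '/k3s agent/ {print $1; exit}')
docker save oisp/frontend:v2.0.0-beta.1 \
  | docker exec -i "$AGENT_ID" ctr i import -
# Verify the image is now visible to the agent:
docker exec "$AGENT_ID" ctr i ls | grep frontend
```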

ismouhi commented 3 years ago

Hi @wagmarcel, the return of sudo docker exec -it 0dd46348d717 ctr i ls | grep frontend is empty. This is the output of docker exec -it 0dd46348d717 ctr i ls:


REF                                                                                                                  TYPE                                                      DIGEST                                                                  SIZE      PLATFORMS                                                   LABELS                          
docker.io/confluentinc/cp-kafka:5.0.1                                                                                application/vnd.oci.image.manifest.v1+json                sha256:1fc673d968118583a9512d4133fc9b5224cbbb4896d86b2214805ead5a261813 543.5 MiB linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/flink:1.7                                                                                          application/vnd.docker.distribution.manifest.list.v2+json sha256:02847d6cc09bfe5fa6c1f347e499984af44537f68188578d6173636618ae7a39 342.5 MiB linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/flink@sha256:02847d6cc09bfe5fa6c1f347e499984af44537f68188578d6173636618ae7a39                      application/vnd.docker.distribution.manifest.list.v2+json sha256:02847d6cc09bfe5fa6c1f347e499984af44537f68188578d6173636618ae7a39 342.5 MiB linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/redis:3.0                                                                                          application/vnd.docker.distribution.manifest.v2+json      sha256:730b765df9fe96af414da64a2b67f3a5f70b8fd13a31e5096fee4807ed802e20 32.6 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/redis@sha256:730b765df9fe96af414da64a2b67f3a5f70b8fd13a31e5096fee4807ed802e20                      application/vnd.docker.distribution.manifest.v2+json      sha256:730b765df9fe96af414da64a2b67f3a5f70b8fd13a31e5096fee4807ed802e20 32.6 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/zookeeper:3.5.5                                                                                    application/vnd.docker.distribution.manifest.list.v2+json sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9 82.8 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/zookeeper@sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9                  application/vnd.docker.distribution.manifest.list.v2+json sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9 82.8 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/minio/k8s-operator:1.0.4                                                                                   application/vnd.docker.distribution.manifest.v2+json      sha256:38d8078181181586d1baafac5bf01d8925d96f61c77112403c9585a5c1e2967d 18.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/minio/k8s-operator@sha256:38d8078181181586d1baafac5bf01d8925d96f61c77112403c9585a5c1e2967d                 application/vnd.docker.distribution.manifest.v2+json      sha256:38d8078181181586d1baafac5bf01d8925d96f61c77112403c9585a5c1e2967d 18.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/minio/minio:RELEASE.2019-09-11T19-53-16Z                                                                   application/vnd.docker.distribution.manifest.v2+json      sha256:e6f79a159813cb01777eefa633f4905c1d4bfe091f4d40de317a506e1d10f30c 18.1 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/minio/minio@sha256:e6f79a159813cb01777eefa633f4905c1d4bfe091f4d40de317a506e1d10f30c                        application/vnd.docker.distribution.manifest.v2+json      sha256:e6f79a159813cb01777eefa633f4905c1d4bfe091f4d40de317a506e1d10f30c 18.1 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/rancher/coredns-coredns:1.6.9                                                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0 12.8 MiB  linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0            application/vnd.docker.distribution.manifest.list.v2+json sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0 12.8 MiB  linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
docker.io/rancher/klipper-helm:v0.2.3                                                                                application/vnd.docker.distribution.manifest.list.v2+json sha256:a7c8cc34edc89609c1b11c1ab212962d437414e5542a8fa059df2936aaa1c06f 44.7 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/klipper-helm@sha256:a7c8cc34edc89609c1b11c1ab212962d437414e5542a8fa059df2936aaa1c06f               application/vnd.docker.distribution.manifest.list.v2+json sha256:a7c8cc34edc89609c1b11c1ab212962d437414e5542a8fa059df2936aaa1c06f 44.7 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/klipper-lb:v0.1.2                                                                                  application/vnd.docker.distribution.manifest.list.v2+json sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a 2.6 MiB   linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/klipper-lb@sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a                 application/vnd.docker.distribution.manifest.list.v2+json sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a 2.6 MiB   linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/library-traefik:1.7.19                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:3ba3ed48c4632f2b02671923950b30b5b7f1b556e559ce15446d1f5d648a037d 22.9 MiB  linux/amd64,linux/arm/v6,linux/arm64/v8                     io.cri-containerd.image=managed 
docker.io/rancher/library-traefik@sha256:3ba3ed48c4632f2b02671923950b30b5b7f1b556e559ce15446d1f5d648a037d            application/vnd.docker.distribution.manifest.list.v2+json sha256:3ba3ed48c4632f2b02671923950b30b5b7f1b556e559ce15446d1f5d648a037d 22.9 MiB  linux/amd64,linux/arm/v6,linux/arm64/v8                     io.cri-containerd.image=managed 
docker.io/rancher/local-path-provisioner:v0.0.11                                                                     application/vnd.docker.distribution.manifest.list.v2+json sha256:0d60b97b101e432606035ab955c623604493e8956484af1cfa207753329bdf81 11.4 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/local-path-provisioner@sha256:0d60b97b101e432606035ab955c623604493e8956484af1cfa207753329bdf81     application/vnd.docker.distribution.manifest.list.v2+json sha256:0d60b97b101e432606035ab955c623604493e8956484af1cfa207753329bdf81 11.4 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/metrics-server:v0.3.6                                                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:b85628b103169d7db52a32a48b46d8942accb7bde3709c0a4888a23d035f9f1e 10.1 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/metrics-server@sha256:b85628b103169d7db52a32a48b46d8942accb7bde3709c0a4888a23d035f9f1e             application/vnd.docker.distribution.manifest.list.v2+json sha256:b85628b103169d7db52a32a48b46d8942accb7bde3709c0a4888a23d035f9f1e 10.1 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/sorintlab/stolon:v0.12.0-pg10                                                                              application/vnd.docker.distribution.manifest.v2+json      sha256:5615637474e108f2920f2b6e7d6cd3d4e8fa1e145827eb882d3ba2fae48dc6fc 142.8 MiB linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/sorintlab/stolon@sha256:5615637474e108f2920f2b6e7d6cd3d4e8fa1e145827eb882d3ba2fae48dc6fc                   application/vnd.docker.distribution.manifest.v2+json      sha256:5615637474e108f2920f2b6e7d6cd3d4e8fa1e145827eb882d3ba2fae48dc6fc 142.8 MiB linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/cassandra-3.11.6:v6.4.0                                                                    application/vnd.docker.distribution.manifest.v2+json      sha256:b600e02cee2d99bafd6bc0f174ae3cfa70c635eacd4f6ef7313da1da25f0e998 180.0 MiB linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/cassandra-3.11.6@sha256:b600e02cee2d99bafd6bc0f174ae3cfa70c635eacd4f6ef7313da1da25f0e998   application/vnd.docker.distribution.manifest.v2+json      sha256:b600e02cee2d99bafd6bc0f174ae3cfa70c635eacd4f6ef7313da1da25f0e998 180.0 MiB linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/cassandra-operator:v6.7.0                                                                  application/vnd.oci.image.manifest.v1+json                sha256:f42a65ba1f0ab5ac8c8586a8105131bd0325b0ebf6bac8549fbf9c784fb370db 13.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/cassandra-operator@sha256:dd1e58808947815b1cccd0847c6b360bd0c0e2097cd78a734382fa65b5b36619 application/vnd.docker.distribution.manifest.v2+json      sha256:dd1e58808947815b1cccd0847c6b360bd0c0e2097cd78a734382fa65b5b36619 13.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/instaclustr-icarus:1.0.1                                                                   application/vnd.oci.image.manifest.v1+json                sha256:f4e3298a563064827192070ad3f8e09ffda270bb01306efac3b2e152ff1d8fb5 350.4 MiB linux/amd64                                                 io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-cainjector:v0.14.1                                                                     application/vnd.docker.distribution.manifest.list.v2+json sha256:b87bb02d46717a38ea32925930495062feeb7d67fcb113bd13cd2dfdd73686be 10.7 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-cainjector@sha256:b87bb02d46717a38ea32925930495062feeb7d67fcb113bd13cd2dfdd73686be     application/vnd.docker.distribution.manifest.list.v2+json sha256:b87bb02d46717a38ea32925930495062feeb7d67fcb113bd13cd2dfdd73686be 10.7 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-controller:v0.14.1                                                                     application/vnd.docker.distribution.manifest.list.v2+json sha256:03b97ae964166d0d5df0331e118667fda5fd0d0a313688a3502c54afd188645f 13.8 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-controller@sha256:03b97ae964166d0d5df0331e118667fda5fd0d0a313688a3502c54afd188645f     application/vnd.docker.distribution.manifest.list.v2+json sha256:03b97ae964166d0d5df0331e118667fda5fd0d0a313688a3502c54afd188645f 13.8 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-webhook:v0.14.1                                                                        application/vnd.docker.distribution.manifest.list.v2+json sha256:bd5325e5f18e93978eb69ff2cb798a46fff874beaedfed47ec9d682c97600c9e 8.8 MiB   linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-webhook@sha256:bd5325e5f18e93978eb69ff2cb798a46fff874beaedfed47ec9d682c97600c9e        application/vnd.docker.distribution.manifest.list.v2+json sha256:bd5325e5f18e93978eb69ff2cb798a46fff874beaedfed47ec9d682c97600c9e 8.8 MiB   linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:05762f87116f9c79ecb9f2adc9f89056e2ab174d67723a41530a6119608141a0                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:bd5325e5f18e93978eb69ff2cb798a46fff874beaedfed47ec9d682c97600c9e 8.8 MiB   linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:0e3eae99e9828146a23183a53697c0e917883415b887d2daecd51f4c1235fbf8                                              application/vnd.docker.distribution.manifest.v2+json      sha256:e6f79a159813cb01777eefa633f4905c1d4bfe091f4d40de317a506e1d10f30c 18.1 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:1fefad8f31f0a6228049ab7ccce9a22ee786ac2aa2da69311580bf1584d8a084                                              application/vnd.docker.distribution.manifest.v2+json      sha256:b600e02cee2d99bafd6bc0f174ae3cfa70c635eacd4f6ef7313da1da25f0e998 180.0 MiB linux/amd64                                                 io.cri-containerd.image=managed 
sha256:274808e7f6b8391860bf792e20e99df26d45fe2a7419336c516a809c0d8b6b7e                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:a7c8cc34edc89609c1b11c1ab212962d437414e5542a8fa059df2936aaa1c06f 44.7 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
sha256:3f81ba5de6184af15f457e23417d0a3f95272b3ada5ac9de684e029ce224ef82                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:b87bb02d46717a38ea32925930495062feeb7d67fcb113bd13cd2dfdd73686be 10.7 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:43367913425e00adda84dd9e795fd8c76eb7f1699393a3d5034c0b93c02108e6                                              application/vnd.docker.distribution.manifest.v2+json      sha256:dd1e58808947815b1cccd0847c6b360bd0c0e2097cd78a734382fa65b5b36619 13.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:4e797b3234604c31f729cb63b6128b623e2f76e629d53ccb84d899de4e73f759                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0 12.8 MiB  linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
sha256:5467234daea962117d54d1ede86349ad6cdb9eb21c114b983bd8c4a76abd1c4e                                              application/vnd.oci.image.manifest.v1+json                sha256:1fc673d968118583a9512d4133fc9b5224cbbb4896d86b2214805ead5a261813 543.5 MiB linux/amd64                                                 io.cri-containerd.image=managed 
sha256:5a65375c17a42f0e778645354a7e30d34deea44aa9ac211bf1230080e3a56eee                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9 82.8 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:70baa4123245857a905c7e701ec1a9c4992f196693d7f073c2bdc0094369872c                                              application/vnd.docker.distribution.manifest.v2+json      sha256:5615637474e108f2920f2b6e7d6cd3d4e8fa1e145827eb882d3ba2fae48dc6fc 142.8 MiB linux/amd64                                                 io.cri-containerd.image=managed 
sha256:80d8990d7b83707ebea2eb8fbe4be4d313296b46e5c733b1b49edde8f2be7cd9                                              application/vnd.docker.distribution.manifest.v2+json      sha256:38d8078181181586d1baafac5bf01d8925d96f61c77112403c9585a5c1e2967d 18.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:8342b4e9dba3bb6d74ac806da2917129d70460256361c342a62401c6b1c40b8b                                              application/vnd.oci.image.manifest.v1+json                sha256:f4e3298a563064827192070ad3f8e09ffda270bb01306efac3b2e152ff1d8fb5 350.4 MiB linux/amd64                                                 io.cri-containerd.image=managed 
sha256:8509d633815112fca25be2257edaad455f98dcbc02bbcd34722e9bcc3a046eb4                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:03b97ae964166d0d5df0331e118667fda5fd0d0a313688a3502c54afd188645f 13.8 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:897ce3c5fc8ff8b1ad4fe1e74f725a0942e8e1dad0ee09096cfa7320da889a53                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a 2.6 MiB   linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
sha256:9d12f9848b99f4e5f5271b7cac02790de44ec12014762a83987cfff7f3c26106                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:0d60b97b101e432606035ab955c623604493e8956484af1cfa207753329bdf81 11.4 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
sha256:9dd718864ce61b4c0805eaf75f87b95302960e65d4857cb8b6591864394be55b                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:b85628b103169d7db52a32a48b46d8942accb7bde3709c0a4888a23d035f9f1e 10.1 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
sha256:aa764f7db3051ee79467eeef28fe6a5f0667ae8af7bd6ead61f9b0ae3d8f638e                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:3ba3ed48c4632f2b02671923950b30b5b7f1b556e559ce15446d1f5d648a037d 22.9 MiB  linux/amd64,linux/arm/v6,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:c44fa74ead882d6417e2736700dce8fdef2f12849d45f9f92023cf1d319a9ee4                                              application/vnd.docker.distribution.manifest.v2+json      sha256:730b765df9fe96af414da64a2b67f3a5f70b8fd13a31e5096fee4807ed802e20 32.6 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:cb399bafceb40f2d6193790e5547bc7075da4497a586953578025792c8ecb468                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:02847d6cc09bfe5fa6c1f347e499984af44537f68188578d6173636618ae7a39 342.5 MiB linux/amd64                                                 io.cri-containerd.image=managed
wagmarcel commented 3 years ago

@ismouhi this confirms that the built images are not imported. But why? When you run the make import-images command above, do you see any error message? The other explanation I have is that they are imported on the wrong node. Can you apply the same command on the k3s server node container in docker to list the images? Is the k3s server started with the "--disable-agent" command-line switch?
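As an aside, whether the server was started with --disable-agent can be checked by grepping its command line. A minimal sketch, using an illustrative COMMAND string as `docker ps --no-trunc` would show it (the sample value is made up here, not taken from a live cluster):

```shell
# A k3s server started with --disable-agent runs no workloads itself,
# so images must be imported into the agent node's containerd instead.
# Sample COMMAND column value, for illustration only:
cmd='/bin/k3s server --disable-agent'
if printf '%s\n' "$cmd" | grep -q -- '--disable-agent'; then
  echo "server has no agent: import images on the k3s agent container"
fi
```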

ismouhi commented 3 years ago

@wagmarcel make import-images didn't return any errors:

websocket-server is saved, copied, imported
frontend is saved, copied, imported
backend is saved, copied, imported
streamer is saved, copied, imported
kairosdb is saved, copied, imported
mqtt-gateway is saved, copied, imported
mqtt-broker is saved, copied, imported
grafana is saved, copied, imported
keycloak is saved, copied, imported
services-operator is saved, copied, imported
services-server is saved, copied, imported
debugger is saved, copied, imported
gcr.io/cassandra-operator/cassandra-3.11.6:v6.4.0, pulled, saved, copied, imported
gcr.io/cassandra-operator/cassandra-operator:v6.7.0, pulled, saved, copied, imported
gcr.io/cassandra-operator/instaclustr-icarus:1.0.1, pulled, saved, copied, imported
confluentinc/cp-kafka:5.0.1, pulled, saved, copied, imported

Yes, the k3s server starts with --disable-agent. This is the output from docker ps:

CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                    NAMES
0dd46348d717        rancher/k3s:v1.17.13-rc1-k3s1   "/bin/k3s agent"         21 hours ago        Up 21 hours                                  k3s_agent_1_2f32dcbfa725
ee43a51b04c1        rancher/k3s:v1.17.13-rc1-k3s1   "/bin/k3s server --d…"   21 hours ago        Up 21 hours         0.0.0.0:6443->6443/tcp   k3s_server_1_f6fc257405f0
wagmarcel commented 3 years ago

@ismouhi hm - it all looks right, except that the images are not imported ... Unfortunately, we discard some of the import-images output in the Makefile. When you look at the Makefile in the platform-launcher directory, you'll find the line:

docker exec -it $(K3S_NODE) ctr image import /tmp/$(image) >> /dev/null && printf ", imported\n"; \

please replace it with

docker exec -it $(K3S_NODE) ctr image import /tmp/$(image) && printf ", imported\n"; \

(make sure that the tabs at the beginning of the line are not changed) and try again to import the images.
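The same edit can also be scripted, which avoids touching the tab by hand. Below is a minimal sketch using sed on a scratch copy of the recipe line (in practice the target is the Makefile in the platform-launcher directory); it drops only the `>> /dev/null` redirection and leaves the leading tab intact:

```shell
# Demo on a scratch file; in practice, edit the Makefile in platform-launcher.
cat > Makefile.demo <<'EOF'
	docker exec -it $(K3S_NODE) ctr image import /tmp/$(image) >> /dev/null && printf ", imported\n"; \
EOF
# Remove only the redirection to /dev/null; the leading tab (required by make)
# and the rest of the recipe line are left untouched.
sed -i 's| >> /dev/null && printf| \&\& printf|' Makefile.demo
```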

ismouhi commented 3 years ago

@wagmarcel OK, I replaced it. Should I run make build again or just import-images?

wagmarcel commented 3 years ago

@ismouhi just import the images again

ismouhi commented 3 years ago

Hi @wagmarcel, still the same error. Output:

namespace/oisp created
"codecentric" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "codecentric" chart repository
...Successfully got an update from the "incubator" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 3 charts
Downloading keycloak from repo https://codecentric.github.io/helm-charts
Downloading kafka from repo https://kubernetes-charts-incubator.storage.googleapis.com
Downloading kafka from repo https://kubernetes-charts-incubator.storage.googleapis.com
Deleting outdated charts
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
Error: failed post-install: timed out waiting for the condition
Makefile:123: recipe for target 'deploy-oisp' failed
make[1]: *** [deploy-oisp] Error 1
make[1]: Leaving directory '/home/its/platform-launcher'
Makefile:102: recipe for target 'deploy-oisp-test' failed
make: *** [deploy-oisp-test] Error 2
wagmarcel commented 3 years ago

@ismouhi Sorry, I meant the output when you run the make import-images command from above.

ismouhi commented 3 years ago

websocket-server is saved, copiedunpacking docker.io/oisp/websocket-server:v2.0.0-beta.1 (sha256:82d1a6dbcffd38c42fcf7e82e1406f3bdf8ef9c6fe653ba59fbacf95041b5769)...done
, imported
frontend is saved, copiedunpacking docker.io/oisp/frontend:v2.0.0-beta.1 (sha256:0578e11bd951f40689c1a69507c07e2c6665ace1d773ffded986008c340acd44)...done
, imported
backend is saved, copiedunpacking docker.io/oisp/backend:v2.0.0-beta.1 (sha256:c87de85e6d9a710488bbc4640d98412bf998bfff6aa7835fa2f486baa92d4567)...done
, imported
streamer is saved, copiedunpacking docker.io/oisp/streamer:v2.0.0-beta.1 (sha256:0f918e40174acec1edb225068ac74476d6404002d22a13b9c4cc43f3768861e4)...done
, imported
kairosdb is saved, copiedunpacking docker.io/oisp/kairosdb:v2.0.0-beta.1 (sha256:f8d568987d8214925183167f3ac1a27095f6342c4c88c4bc06276c8acedc4559)...done
, imported
mqtt-gateway is saved, copiedunpacking docker.io/oisp/mqtt-gateway:v2.0.0-beta.1 (sha256:d7d0b1bde7151c8289805235c953cff553e541757286a772a70f8f7af19082dd)...done
, imported
mqtt-broker is saved, copiedunpacking docker.io/oisp/mqtt-broker:v2.0.0-beta.1 (sha256:ccf77db7c11de288e6c94d359c3ce3ad669b7667e9971b88b42734b4c87121f9)...done
, imported
grafana is saved, copiedunpacking docker.io/oisp/grafana:v2.0.0-beta.1 (sha256:2a01443720d9b9448df3c8012cdf3df67ea3daa3759eade6d79fd04192a796ab)...done
, imported
keycloak is saved, copiedunpacking docker.io/oisp/keycloak:v2.0.0-beta.1 (sha256:262f1436914405f236a846dae3fca2f5892276c8774589f76a682649960db1a0)...done
, imported
services-operator is saved, copiedunpacking docker.io/oisp/services-operator:v2.0.0-beta.1 (sha256:4f08f6b105fd399ab923a76361598cb54601617df2654905eba9f0afefaef24d)...done
, imported
services-server is saved, copiedunpacking docker.io/oisp/services-server:v2.0.0-beta.1 (sha256:b8cc7c7ee7a742d2fe544ad78042c38aa2216ca9a157189971abfe38cfe84a2a)...done
, imported
debugger is saved, copiedunpacking docker.io/oisp/debugger:v2.0.0-beta.1 (sha256:86d2f9e9000812f3111bc9652bf35f5042da6b011155a5d602350f4f213e5ce1)...done
, imported
gcr.io/cassandra-operator/cassandra-3.11.6:v6.4.0, pulled, saved, copied, imported
gcr.io/cassandra-operator/cassandra-operator:v6.7.0, pulled, saved, copied, imported
gcr.io/cassandra-operator/instaclustr-icarus:1.0.1, pulled, saved, copied, imported
confluentinc/cp-kafka:5.0.1, pulled, saved, copied, imported
wagmarcel commented 3 years ago

@ismouhi it all looks healthy - but it is not imported into your cluster ... so where is it imported? This is strange ... Assuming that the docker container names of the k3s cluster are the same, can you again provide me the output of

docker exec -it k3s_server_1_f6fc257405f0 ctr i ls and docker exec -it k3s_agent_1_2f32dcbfa725 ctr i ls
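The per-node check boils down to grepping the `ctr i ls -q` listing for the expected image reference. A small sketch, where the helper name and the sample listing are made up for illustration:

```shell
# Hypothetical helper: succeeds if the `ctr i ls -q` listing on stdin
# contains the given OISP image name.
has_image() { grep -q "^docker.io/oisp/$1:"; }
# Sample listing, abbreviated from the output in this thread:
listing='docker.io/oisp/frontend:v2.0.0-beta.1
docker.io/library/redis:3.0'
if printf '%s\n' "$listing" | has_image frontend; then
  echo "frontend image is imported on this node"
fi
```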

ismouhi commented 3 years ago

The output of docker exec -it k3s_server_1_f6fc257405f0 ctr i ls:

ctr: failed to dial "/run/k3s/containerd/containerd.sock": context deadline exceeded

The output of docker exec -it k3s_agent_1_2f32dcbfa725 ctr i ls:

REF                                                                                                                  TYPE                                                      DIGEST                                                                  SIZE      PLATFORMS                                                   LABELS                          
docker.io/confluentinc/cp-kafka:5.0.1                                                                                application/vnd.docker.distribution.manifest.v2+json      sha256:c87b1c07fb53b1a82d24b436e53485917876a963dc67311800109fa12fe9a63d 266.8 MiB linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/confluentinc/cp-kafka@sha256:c87b1c07fb53b1a82d24b436e53485917876a963dc67311800109fa12fe9a63d              application/vnd.docker.distribution.manifest.v2+json      sha256:c87b1c07fb53b1a82d24b436e53485917876a963dc67311800109fa12fe9a63d 266.8 MiB linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/flink:1.7                                                                                          application/vnd.docker.distribution.manifest.list.v2+json sha256:02847d6cc09bfe5fa6c1f347e499984af44537f68188578d6173636618ae7a39 342.5 MiB linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/flink@sha256:02847d6cc09bfe5fa6c1f347e499984af44537f68188578d6173636618ae7a39                      application/vnd.docker.distribution.manifest.list.v2+json sha256:02847d6cc09bfe5fa6c1f347e499984af44537f68188578d6173636618ae7a39 342.5 MiB linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/redis:3.0                                                                                          application/vnd.docker.distribution.manifest.v2+json      sha256:730b765df9fe96af414da64a2b67f3a5f70b8fd13a31e5096fee4807ed802e20 32.6 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/redis@sha256:730b765df9fe96af414da64a2b67f3a5f70b8fd13a31e5096fee4807ed802e20                      application/vnd.docker.distribution.manifest.v2+json      sha256:730b765df9fe96af414da64a2b67f3a5f70b8fd13a31e5096fee4807ed802e20 32.6 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/zookeeper:3.5.5                                                                                    application/vnd.docker.distribution.manifest.list.v2+json sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9 82.8 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/library/zookeeper@sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9                  application/vnd.docker.distribution.manifest.list.v2+json sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9 82.8 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/minio/k8s-operator:1.0.4                                                                                   application/vnd.docker.distribution.manifest.v2+json      sha256:38d8078181181586d1baafac5bf01d8925d96f61c77112403c9585a5c1e2967d 18.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/minio/k8s-operator@sha256:38d8078181181586d1baafac5bf01d8925d96f61c77112403c9585a5c1e2967d                 application/vnd.docker.distribution.manifest.v2+json      sha256:38d8078181181586d1baafac5bf01d8925d96f61c77112403c9585a5c1e2967d 18.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/minio/minio:RELEASE.2019-09-11T19-53-16Z                                                                   application/vnd.docker.distribution.manifest.v2+json      sha256:e6f79a159813cb01777eefa633f4905c1d4bfe091f4d40de317a506e1d10f30c 18.1 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/minio/minio@sha256:e6f79a159813cb01777eefa633f4905c1d4bfe091f4d40de317a506e1d10f30c                        application/vnd.docker.distribution.manifest.v2+json      sha256:e6f79a159813cb01777eefa633f4905c1d4bfe091f4d40de317a506e1d10f30c 18.1 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/rancher/coredns-coredns:1.6.9                                                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0 12.8 MiB  linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
docker.io/rancher/coredns-coredns@sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0            application/vnd.docker.distribution.manifest.list.v2+json sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0 12.8 MiB  linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
docker.io/rancher/klipper-lb:v0.1.2                                                                                  application/vnd.docker.distribution.manifest.list.v2+json sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a 2.6 MiB   linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/klipper-lb@sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a                 application/vnd.docker.distribution.manifest.list.v2+json sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a 2.6 MiB   linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/library-traefik:1.7.19                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:3ba3ed48c4632f2b02671923950b30b5b7f1b556e559ce15446d1f5d648a037d 22.9 MiB  linux/amd64,linux/arm/v6,linux/arm64/v8                     io.cri-containerd.image=managed 
docker.io/rancher/library-traefik@sha256:3ba3ed48c4632f2b02671923950b30b5b7f1b556e559ce15446d1f5d648a037d            application/vnd.docker.distribution.manifest.list.v2+json sha256:3ba3ed48c4632f2b02671923950b30b5b7f1b556e559ce15446d1f5d648a037d 22.9 MiB  linux/amd64,linux/arm/v6,linux/arm64/v8                     io.cri-containerd.image=managed 
docker.io/rancher/local-path-provisioner:v0.0.11                                                                     application/vnd.docker.distribution.manifest.list.v2+json sha256:0d60b97b101e432606035ab955c623604493e8956484af1cfa207753329bdf81 11.4 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/local-path-provisioner@sha256:0d60b97b101e432606035ab955c623604493e8956484af1cfa207753329bdf81     application/vnd.docker.distribution.manifest.list.v2+json sha256:0d60b97b101e432606035ab955c623604493e8956484af1cfa207753329bdf81 11.4 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/metrics-server:v0.3.6                                                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:b85628b103169d7db52a32a48b46d8942accb7bde3709c0a4888a23d035f9f1e 10.1 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/rancher/metrics-server@sha256:b85628b103169d7db52a32a48b46d8942accb7bde3709c0a4888a23d035f9f1e             application/vnd.docker.distribution.manifest.list.v2+json sha256:b85628b103169d7db52a32a48b46d8942accb7bde3709c0a4888a23d035f9f1e 10.1 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
docker.io/sorintlab/stolon:v0.12.0-pg10                                                                              application/vnd.docker.distribution.manifest.v2+json      sha256:5615637474e108f2920f2b6e7d6cd3d4e8fa1e145827eb882d3ba2fae48dc6fc 142.8 MiB linux/amd64                                                 io.cri-containerd.image=managed 
docker.io/sorintlab/stolon@sha256:5615637474e108f2920f2b6e7d6cd3d4e8fa1e145827eb882d3ba2fae48dc6fc                   application/vnd.docker.distribution.manifest.v2+json      sha256:5615637474e108f2920f2b6e7d6cd3d4e8fa1e145827eb882d3ba2fae48dc6fc 142.8 MiB linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/cassandra-3.11.6:v6.4.0                                                                    application/vnd.docker.distribution.manifest.v2+json      sha256:b600e02cee2d99bafd6bc0f174ae3cfa70c635eacd4f6ef7313da1da25f0e998 180.0 MiB linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/cassandra-3.11.6@sha256:b600e02cee2d99bafd6bc0f174ae3cfa70c635eacd4f6ef7313da1da25f0e998   application/vnd.docker.distribution.manifest.v2+json      sha256:b600e02cee2d99bafd6bc0f174ae3cfa70c635eacd4f6ef7313da1da25f0e998 180.0 MiB linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/cassandra-operator:v6.7.0                                                                  application/vnd.oci.image.manifest.v1+json                sha256:f42a65ba1f0ab5ac8c8586a8105131bd0325b0ebf6bac8549fbf9c784fb370db 13.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/cassandra-operator@sha256:dd1e58808947815b1cccd0847c6b360bd0c0e2097cd78a734382fa65b5b36619 application/vnd.docker.distribution.manifest.v2+json      sha256:dd1e58808947815b1cccd0847c6b360bd0c0e2097cd78a734382fa65b5b36619 13.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/instaclustr-icarus:1.0.1                                                                   application/vnd.docker.distribution.manifest.v2+json      sha256:2613135226b4ab1520938cf972a59300cff3a91684cd8ebe63d951826e7ef8f2 180.1 MiB linux/amd64                                                 io.cri-containerd.image=managed 
gcr.io/cassandra-operator/instaclustr-icarus@sha256:2613135226b4ab1520938cf972a59300cff3a91684cd8ebe63d951826e7ef8f2 application/vnd.docker.distribution.manifest.v2+json      sha256:2613135226b4ab1520938cf972a59300cff3a91684cd8ebe63d951826e7ef8f2 180.1 MiB linux/amd64                                                 io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-cainjector:v0.14.1                                                                     application/vnd.docker.distribution.manifest.list.v2+json sha256:b87bb02d46717a38ea32925930495062feeb7d67fcb113bd13cd2dfdd73686be 10.7 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-cainjector@sha256:b87bb02d46717a38ea32925930495062feeb7d67fcb113bd13cd2dfdd73686be     application/vnd.docker.distribution.manifest.list.v2+json sha256:b87bb02d46717a38ea32925930495062feeb7d67fcb113bd13cd2dfdd73686be 10.7 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-controller:v0.14.1                                                                     application/vnd.docker.distribution.manifest.list.v2+json sha256:03b97ae964166d0d5df0331e118667fda5fd0d0a313688a3502c54afd188645f 13.8 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-controller@sha256:03b97ae964166d0d5df0331e118667fda5fd0d0a313688a3502c54afd188645f     application/vnd.docker.distribution.manifest.list.v2+json sha256:03b97ae964166d0d5df0331e118667fda5fd0d0a313688a3502c54afd188645f 13.8 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-webhook:v0.14.1                                                                        application/vnd.docker.distribution.manifest.list.v2+json sha256:bd5325e5f18e93978eb69ff2cb798a46fff874beaedfed47ec9d682c97600c9e 8.8 MiB   linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
quay.io/jetstack/cert-manager-webhook@sha256:bd5325e5f18e93978eb69ff2cb798a46fff874beaedfed47ec9d682c97600c9e        application/vnd.docker.distribution.manifest.list.v2+json sha256:bd5325e5f18e93978eb69ff2cb798a46fff874beaedfed47ec9d682c97600c9e 8.8 MiB   linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:05762f87116f9c79ecb9f2adc9f89056e2ab174d67723a41530a6119608141a0                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:bd5325e5f18e93978eb69ff2cb798a46fff874beaedfed47ec9d682c97600c9e 8.8 MiB   linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:0e3eae99e9828146a23183a53697c0e917883415b887d2daecd51f4c1235fbf8                                              application/vnd.docker.distribution.manifest.v2+json      sha256:e6f79a159813cb01777eefa633f4905c1d4bfe091f4d40de317a506e1d10f30c 18.1 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:1fefad8f31f0a6228049ab7ccce9a22ee786ac2aa2da69311580bf1584d8a084                                              application/vnd.docker.distribution.manifest.v2+json      sha256:b600e02cee2d99bafd6bc0f174ae3cfa70c635eacd4f6ef7313da1da25f0e998 180.0 MiB linux/amd64                                                 io.cri-containerd.image=managed 
sha256:3f81ba5de6184af15f457e23417d0a3f95272b3ada5ac9de684e029ce224ef82                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:b87bb02d46717a38ea32925930495062feeb7d67fcb113bd13cd2dfdd73686be 10.7 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:43367913425e00adda84dd9e795fd8c76eb7f1699393a3d5034c0b93c02108e6                                              application/vnd.docker.distribution.manifest.v2+json      sha256:dd1e58808947815b1cccd0847c6b360bd0c0e2097cd78a734382fa65b5b36619 13.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:4e797b3234604c31f729cb63b6128b623e2f76e629d53ccb84d899de4e73f759                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:e70c936deab8efed89db66f04847fec137dbb81d5b456e8068b6e71cb770f6c0 12.8 MiB  linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x io.cri-containerd.image=managed 
sha256:5467234daea962117d54d1ede86349ad6cdb9eb21c114b983bd8c4a76abd1c4e                                              application/vnd.docker.distribution.manifest.v2+json      sha256:c87b1c07fb53b1a82d24b436e53485917876a963dc67311800109fa12fe9a63d 266.8 MiB linux/amd64                                                 io.cri-containerd.image=managed 
sha256:5a65375c17a42f0e778645354a7e30d34deea44aa9ac211bf1230080e3a56eee                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:b7a76ec06f68fd9c801b72dfd283701bc7d8a8b0609277a0d570e8e6768e4ad9 82.8 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:70baa4123245857a905c7e701ec1a9c4992f196693d7f073c2bdc0094369872c                                              application/vnd.docker.distribution.manifest.v2+json      sha256:5615637474e108f2920f2b6e7d6cd3d4e8fa1e145827eb882d3ba2fae48dc6fc 142.8 MiB linux/amd64                                                 io.cri-containerd.image=managed 
sha256:80d8990d7b83707ebea2eb8fbe4be4d313296b46e5c733b1b49edde8f2be7cd9                                              application/vnd.docker.distribution.manifest.v2+json      sha256:38d8078181181586d1baafac5bf01d8925d96f61c77112403c9585a5c1e2967d 18.0 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:8342b4e9dba3bb6d74ac806da2917129d70460256361c342a62401c6b1c40b8b                                              application/vnd.docker.distribution.manifest.v2+json      sha256:2613135226b4ab1520938cf972a59300cff3a91684cd8ebe63d951826e7ef8f2 180.1 MiB linux/amd64                                                 io.cri-containerd.image=managed 
sha256:8509d633815112fca25be2257edaad455f98dcbc02bbcd34722e9bcc3a046eb4                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:03b97ae964166d0d5df0331e118667fda5fd0d0a313688a3502c54afd188645f 13.8 MiB  linux/amd64,linux/arm/v7,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:897ce3c5fc8ff8b1ad4fe1e74f725a0942e8e1dad0ee09096cfa7320da889a53                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a 2.6 MiB   linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
sha256:9d12f9848b99f4e5f5271b7cac02790de44ec12014762a83987cfff7f3c26106                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:0d60b97b101e432606035ab955c623604493e8956484af1cfa207753329bdf81 11.4 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
sha256:9dd718864ce61b4c0805eaf75f87b95302960e65d4857cb8b6591864394be55b                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:b85628b103169d7db52a32a48b46d8942accb7bde3709c0a4888a23d035f9f1e 10.1 MiB  linux/amd64,linux/arm,linux/arm64                           io.cri-containerd.image=managed 
sha256:aa764f7db3051ee79467eeef28fe6a5f0667ae8af7bd6ead61f9b0ae3d8f638e                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:3ba3ed48c4632f2b02671923950b30b5b7f1b556e559ce15446d1f5d648a037d 22.9 MiB  linux/amd64,linux/arm/v6,linux/arm64/v8                     io.cri-containerd.image=managed 
sha256:c44fa74ead882d6417e2736700dce8fdef2f12849d45f9f92023cf1d319a9ee4                                              application/vnd.docker.distribution.manifest.v2+json      sha256:730b765df9fe96af414da64a2b67f3a5f70b8fd13a31e5096fee4807ed802e20 32.6 MiB  linux/amd64                                                 io.cri-containerd.image=managed 
sha256:cb399bafceb40f2d6193790e5547bc7075da4497a586953578025792c8ecb468                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:02847d6cc09bfe5fa6c1f347e499984af44537f68188578d6173636618ae7a39 342.5 MiB linux/amd64                                                 io.cri-containerd.image=managed 
wagmarcel commented 3 years ago

@ismouhi thanks. I must admit, I have used up all my ideas about what could have gone wrong. Sorry :-( One thing: in the Makefile, can you please replace:

@$(foreach image,$(CONTAINERS), \

by

$(foreach image,$(CONTAINERS), \

and run the make import-images call again? This will show us more details.
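For background (a minimal throwaway demo, not the project's Makefile): the leading @ in a Make recipe suppresses echoing of the command line, so removing it makes make print each command before executing it — which is why dropping the @ reveals the full docker save/cp/import pipeline:

```shell
# Write a tiny demo Makefile: one silenced recipe (@), one echoed one.
printf 'silent:\n\t@echo hello\nverbose:\n\techo hello\n' > /tmp/demo.mk
make -f /tmp/demo.mk silent    # prints only: hello
make -f /tmp/demo.mk verbose   # prints the command "echo hello" first, then: hello
```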

ismouhi commented 3 years ago

@wagmarcel this is the output from import-images after the changes:

printf websocket-server; docker save oisp/websocket-server:v2.0.0-beta.1 -o /tmp/websocket-server && printf " is saved" && docker cp /tmp/websocket-server k3s_agent_1_2f32dcbfa725:/tmp/websocket-server && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/websocket-server && printf ", imported\n";   printf frontend; docker save oisp/frontend:v2.0.0-beta.1 -o /tmp/frontend && printf " is saved" && docker cp /tmp/frontend k3s_agent_1_2f32dcbfa725:/tmp/frontend && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/frontend && printf ", imported\n";   printf backend; docker save oisp/backend:v2.0.0-beta.1 -o /tmp/backend && printf " is saved" && docker cp /tmp/backend k3s_agent_1_2f32dcbfa725:/tmp/backend && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/backend && printf ", imported\n";   printf streamer; docker save oisp/streamer:v2.0.0-beta.1 -o /tmp/streamer && printf " is saved" && docker cp /tmp/streamer k3s_agent_1_2f32dcbfa725:/tmp/streamer && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/streamer && printf ", imported\n";   printf kairosdb; docker save oisp/kairosdb:v2.0.0-beta.1 -o /tmp/kairosdb && printf " is saved" && docker cp /tmp/kairosdb k3s_agent_1_2f32dcbfa725:/tmp/kairosdb && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/kairosdb && printf ", imported\n";   printf mqtt-gateway; docker save oisp/mqtt-gateway:v2.0.0-beta.1 -o /tmp/mqtt-gateway && printf " is saved" && docker cp /tmp/mqtt-gateway k3s_agent_1_2f32dcbfa725:/tmp/mqtt-gateway && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/mqtt-gateway && printf ", imported\n";   printf mqtt-broker; docker save oisp/mqtt-broker:v2.0.0-beta.1 -o /tmp/mqtt-broker && printf " is saved" && docker cp /tmp/mqtt-broker k3s_agent_1_2f32dcbfa725:/tmp/mqtt-broker && printf ", copied" && docker exec -it 
k3s_agent_1_2f32dcbfa725 ctr image import /tmp/mqtt-broker && printf ", imported\n";   printf grafana; docker save oisp/grafana:v2.0.0-beta.1 -o /tmp/grafana && printf " is saved" && docker cp /tmp/grafana k3s_agent_1_2f32dcbfa725:/tmp/grafana && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/grafana && printf ", imported\n";   printf keycloak; docker save oisp/keycloak:v2.0.0-beta.1 -o /tmp/keycloak && printf " is saved" && docker cp /tmp/keycloak k3s_agent_1_2f32dcbfa725:/tmp/keycloak && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/keycloak && printf ", imported\n";   printf services-operator; docker save oisp/services-operator:v2.0.0-beta.1 -o /tmp/services-operator && printf " is saved" && docker cp /tmp/services-operator k3s_agent_1_2f32dcbfa725:/tmp/services-operator && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/services-operator && printf ", imported\n";   printf services-server; docker save oisp/services-server:v2.0.0-beta.1 -o /tmp/services-server && printf " is saved" && docker cp /tmp/services-server k3s_agent_1_2f32dcbfa725:/tmp/services-server && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/services-server && printf ", imported\n";   printf debugger; docker save oisp/debugger:v2.0.0-beta.1 -o /tmp/debugger && printf " is saved" && docker cp /tmp/debugger k3s_agent_1_2f32dcbfa725:/tmp/debugger && printf ", copied" && docker exec -it k3s_agent_1_2f32dcbfa725 ctr image import /tmp/debugger && printf ", imported\n"; 
websocket-server is saved, copiedunpacking docker.io/oisp/websocket-server:v2.0.0-beta.1 (sha256:82d1a6dbcffd38c42fcf7e82e1406f3bdf8ef9c6fe653ba59fbacf95041b5769)...done
, imported
frontend is saved, copiedunpacking docker.io/oisp/frontend:v2.0.0-beta.1 (sha256:0578e11bd951f40689c1a69507c07e2c6665ace1d773ffded986008c340acd44)...done
, imported
backend is saved, copiedunpacking docker.io/oisp/backend:v2.0.0-beta.1 (sha256:c87de85e6d9a710488bbc4640d98412bf998bfff6aa7835fa2f486baa92d4567)...done
, imported
streamer is saved, copiedunpacking docker.io/oisp/streamer:v2.0.0-beta.1 (sha256:0f918e40174acec1edb225068ac74476d6404002d22a13b9c4cc43f3768861e4)...done
, imported
kairosdb is saved, copiedunpacking docker.io/oisp/kairosdb:v2.0.0-beta.1 (sha256:f8d568987d8214925183167f3ac1a27095f6342c4c88c4bc06276c8acedc4559)...done
, imported
mqtt-gateway is saved, copiedunpacking docker.io/oisp/mqtt-gateway:v2.0.0-beta.1 (sha256:d7d0b1bde7151c8289805235c953cff553e541757286a772a70f8f7af19082dd)...done
, imported
mqtt-broker is saved, copiedunpacking docker.io/oisp/mqtt-broker:v2.0.0-beta.1 (sha256:ccf77db7c11de288e6c94d359c3ce3ad669b7667e9971b88b42734b4c87121f9)...done
, imported
grafana is saved, copiedunpacking docker.io/oisp/grafana:v2.0.0-beta.1 (sha256:2a01443720d9b9448df3c8012cdf3df67ea3daa3759eade6d79fd04192a796ab)...done
, imported
keycloak is saved, copiedunpacking docker.io/oisp/keycloak:v2.0.0-beta.1 (sha256:262f1436914405f236a846dae3fca2f5892276c8774589f76a682649960db1a0)...done
, imported
services-operator is saved, copiedunpacking docker.io/oisp/services-operator:v2.0.0-beta.1 (sha256:4f08f6b105fd399ab923a76361598cb54601617df2654905eba9f0afefaef24d)...done
, imported
services-server is saved, copiedunpacking docker.io/oisp/services-server:v2.0.0-beta.1 (sha256:b8cc7c7ee7a742d2fe544ad78042c38aa2216ca9a157189971abfe38cfe84a2a)...done
, imported
debugger is saved, copiedunpacking docker.io/oisp/debugger:v2.0.0-beta.1 (sha256:86d2f9e9000812f3111bc9652bf35f5042da6b011155a5d602350f4f213e5ce1)...done
, imported
gcr.io/cassandra-operator/cassandra-3.11.6:v6.4.0, pulled, saved, copied, imported
gcr.io/cassandra-operator/cassandra-operator:v6.7.0, pulled, saved, copied, imported
gcr.io/cassandra-operator/instaclustr-icarus:1.0.1, pulled, saved, copied, imported
confluentinc/cp-kafka:5.0.1, pulled, saved, copied, imported

I will deploy and send you the result. Thank you for your help!

ismouhi commented 3 years ago

@wagmarcel Earlier today I created a new VM with larger storage and launched the install process on it. I replaced docker exec -it $(K3S_NODE) ctr image import /tmp/$(image) >> /dev/null && printf ", imported\n"; \ with docker exec -it $(K3S_NODE) ctr image import /tmp/$(image) && printf ", imported\n"; \ and started deploy-oisp-test, which passed. But at make test I got this error:

  150 passing (7m)
  8 pending
  3 failing

  1) Adding user and posting email ...

       Shall add a new user and post email:
     Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.

  2) change password and delete receiver ... 

       Shall change password:
     Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.

  3) change password and delete receiver ... 

       Shall delete a user:
     Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.

npm ERR! Test failed.  See above for more details.
Makefile:48: recipe for target 'test' failed
make: *** [test] Error 1
command terminated with exit code 2
Makefile:305: recipe for target 'test' failed
make: *** [test] Error 2

In the old VM I got a new error message:

namespace/oisp created
"codecentric" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "codecentric" chart repository
...Unable to get an update from the "incubator" chart repository (https://kubernetes-charts-incubator.storage.googleapis.com):
    Get https://kubernetes-charts-incubator.storage.googleapis.com/index.yaml: dial tcp 216.58.211.48:443: i/o timeout
Update Complete. ⎈Happy Helming!⎈
Saving 3 charts
Downloading keycloak from repo https://codecentric.github.io/helm-charts
Downloading kafka from repo https://kubernetes-charts-incubator.storage.googleapis.com
Save error occurred:  could not download https://kubernetes-charts-incubator.storage.googleapis.com/kafka-0.20.8.tgz: Get https://kubernetes-charts-incubator.storage.googleapis.com/kafka-0.20.8.tgz: dial tcp 216.58.211.48:443: i/o timeout
Deleting newly downloaded charts, restoring pre-update state
Error: could not download https://kubernetes-charts-incubator.storage.googleapis.com/kafka-0.20.8.tgz: Get https://kubernetes-charts-incubator.storage.googleapis.com/kafka-0.20.8.tgz: dial tcp 216.58.211.48:443: i/o timeout
Makefile:123: recipe for target 'deploy-oisp' failed
make: *** [deploy-oisp] Error 1
wagmarcel commented 3 years ago

@ismouhi download problems happen from time to time - sometimes when trying too often you hit a rate limit, or something on the other side is down. But in the old VM there is definitely something unusual. Congratulations on getting it to run in a new VM! The test timeouts are interesting - you said you have 4 cores? I would not expect this. Can you undeploy, deploy, and test again? In case you see the problems again, can you please provide me the logs of the frontend pod?

ismouhi commented 3 years ago

Hi @wagmarcel, the new VM has 8 cores. I tried make test again and the output is the same. This is the output of FRONTEND_POD=$(kubectl -n oisp get pods| grep frontend| cut -f 1 -d " "); kubectl -n oisp describe pod $FRONTEND_POD:

Name:         frontend-7b7947665d-9gh2h
Namespace:    oisp
Priority:     0
Node:         d2285683d5c6/172.18.0.3
Start Time:   Wed, 09 Dec 2020 14:41:42 +0100
Labels:       app=frontend
              pod-template-hash=7b7947665d
Annotations:  <none>
Status:       Running
IP:           10.42.0.181
IPs:
  IP:           10.42.0.181
Controlled By:  ReplicaSet/frontend-7b7947665d
Containers:
  frontend:
    Container ID:  containerd://d86fbe1b44f819ecd90683ceb3d46185d8c9f18c0b7063d379abbfb0b03d0ed2
    Image:         oisp/frontend:v2.0.0-beta.1
    Image ID:      sha256:7556b09fd940e09e35039f96bc15e0563219dc21d30ae82d63949cf339a3b184
    Port:          4001/TCP
    Host Port:     0/TCP
    Args:
      ./wait-for-it.sh
      postgres:5432
      -t
      300000
      -s
      --
      ./wait-for-it.sh
      redis:6379
      -t
      300000
      -s
      --
      ./wait-for-it.sh
      oisp-kafka-headless:9092
      -t
      300000
      -s
      --
      ./wait-for-it.sh
      keycloak-http:4080
      -t
      300000
      -s
      --
      ./scripts/docker-start.sh
      --disable-rate-limits
    State:          Running
      Started:      Wed, 09 Dec 2020 14:51:53 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 09 Dec 2020 14:47:36 +0100
      Finished:     Wed, 09 Dec 2020 14:51:51 +0100
    Ready:          True
    Restart Count:  2
    Requests:
      cpu:      50m
    Liveness:   http-get http://:4001/v1/api/health delay=200s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:4001/v1/api/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      NODE_ENV:                      local
      OISP_FRONTEND_CONFIG:          <set to the key 'frontend' of config map 'oisp-config'>           Optional: false
      OISP_POSTGRES_CONFIG:          <set to the key 'postgres' of config map 'oisp-config'>           Optional: false
      OISP_REDIS_CONFIG:             <set to the key 'redis' of config map 'oisp-config'>              Optional: false
      OISP_KAFKA_CONFIG:             <set to the key 'kafka' of config map 'oisp-config'>              Optional: false
      OISP_SMTP_CONFIG:              <set to the key 'smtp' of config map 'oisp-config'>               Optional: false
      OISP_FRONTENDSECURITY_CONFIG:  <set to the key 'frontend-security' of config map 'oisp-config'>  Optional: false
      OISP_KEYCLOAK_CONFIG:          <set to the key 'keycloak' of config map 'oisp-config'>           Optional: false
      OISP_GATEWAY_CONFIG:           <set to the key 'gateway' of config map 'oisp-config'>            Optional: false
      OISP_BACKENDHOST_CONFIG:       <set to the key 'backend-host' of config map 'oisp-config'>       Optional: false
      OISP_WEBSOCKETUSER_CONFIG:     <set to the key 'websocket-user' of config map 'oisp-config'>     Optional: false
      OISP_RULEENGINE_CONFIG:        <set to the key 'rule-engine' of config map 'oisp-config'>        Optional: false
      OISP_MAIL_CONFIG:              <set to the key 'mail' of config map 'oisp-config'>               Optional: false
      OISP_GRAFANA_CONFIG:           <set to the key 'grafana' of config map 'oisp-config'>            Optional: false
    Mounts:
      /app/keys from jwt-keys (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x2lbp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  jwt-keys:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  oisp-secrets
    Optional:    false
  default-token-x2lbp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x2lbp
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
wagmarcel commented 3 years ago

@ismouhi OK, thanks. I would also need the logs:

FRONTEND_POD=$(kubectl -n oisp get pods| grep frontend| cut -f 1 -d " "); kubectl -n oisp logs $FRONTEND_POD
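As a side note on how that pipeline extracts the pod name: grep keeps only the matching row of the kubectl get pods table, and cut takes its first space-separated field. A minimal sketch on sample data (the pod names below are illustrative, not live cluster output):

```shell
# grep selects the frontend line; cut keeps the first space-separated
# field, which is the pod name (sample table rows, not real output).
printf 'frontend-7b7947665d-9gh2h 1/1 Running\nbackend-5dfc9 1/1 Running\n' \
  | grep frontend | cut -f 1 -d " "
# → frontend-7b7947665d-9gh2h
```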

Another thing: do you see the dashboard, and can you create a test user etc.? As described here: https://platform-launcher.readthedocs.io/en/latest/usage/quickstart.html#access-and-login-to-local-platform

ismouhi commented 3 years ago

Hi @wagmarcel, here are the logs: logs.txt. I can see the dashboard and I created a test user.

ismouhi commented 3 years ago

Hi @wagmarcel, I restarted the machine and undeployed the platform. When I tried to deploy it again, it showed this error:

    --set tag=v2.0.0-beta.1 \
    --set keycloak.keycloak.image.tag=v2.0.0-beta.1 \

The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
Makefile:123: recipe for target 'deploy-oisp' failed
make[1]: *** [deploy-oisp] Error 1
make[1]: Leaving directory '/home/iits/platform-launcher'
Makefile:102: recipe for target 'deploy-oisp-test' failed
make: *** [deploy-oisp-test] Error 2
wagmarcel commented 3 years ago

@ismouhi when you restart the machine, the k3s service no longer runs. To restart the k3s cluster, there is a script in the util folder:

cd util
bash ./restart-cluster.sh
ismouhi commented 3 years ago

@wagmarcel restart-cluster.sh keeps throwing this message in a loop:

Waiting for traefik to come up
Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
.Unable to connect to the server: x509: certificate signed by unknown authority
wagmarcel commented 3 years ago

@ismouhi are you by chance behind a proxy? Do you have the KUBECONFIG variable set explicitly?

ismouhi commented 3 years ago

The restart works now - I ran it with sudo. I will deploy the platform now.

ismouhi commented 3 years ago

Hi @wagmarcel, is there any news about the errors from make test? Is there an option to create a system admin user that has access to all the accounts in the platform? Also, is there an option to create an API token that lasts more than 24h? Thanks!

wagmarcel commented 3 years ago

Hi @ismouhi, I have no update on the make test. From the logs I could not conclude what went wrong; we might need to switch on debugging. On longer API tokens: the usual way is to refresh the token with the auth API when it has expired - longer token validity is a security problem, as you know. However, there is an unofficial way to create longer tokens:

FRONTEND_POD=$(kubectl -n oisp get pods| grep frontend| cut -d " " -f 1)
kubectl -n oisp exec $FRONTEND_POD -- node admin getUserToken <email> <expiration in minutes>

To your system admin user question: yes, there is a system user - however, it only allows you to access the REST API with system admin rights, so you have to know the account ids (e.g. what will NOT work is logging in to the GUI with the system user and seeing all accounts; it is meant more for the CLI). Adding the system user role has to be done via the Keycloak service. To access your local Keycloak instance, you need to start the local service forwarding with:

sudo -E KUBECONFIG=~/k3s/kubeconfig.yaml kubefwd -n oisp svc

You can then log in to http://keycloak-http:4080/keycloak/ and go to the admin console. To retrieve the password:

kubectl -n oisp get cm/oisp-config -o=jsonpath='{.data.keycloak-admin}'| jq .password
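To illustrate the jq step in isolation: the ConfigMap key holds a JSON document, and jq .password extracts just the password field from it. A minimal sketch with sample JSON standing in for the live cluster output:

```shell
# jq .password pulls the password field out of a JSON document
# (sample JSON below, not real credentials).
echo '{"user": "admin", "password": "s3cret"}' | jq .password
# → "s3cret"
```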

Select your user, go to the "Role Mappings" tab, select "oisp-frontend" under "Client Roles", and assign the "sysadmin" role.

ismouhi commented 3 years ago

Hi @wagmarcel, the VM that I run OISP on restarted again, and I faced the same issue when I tried to redeploy the platform.

So I tried to install the platform on a VPS, following the same instructions, and got this error on deploy-oisp-test:

Saving 3 charts
Downloading keycloak from repo https://codecentric.github.io/helm-charts
Downloading kafka from repo https://kubernetes-charts-incubator.storage.googleapis.com
Downloading kafka from repo https://kubernetes-charts-incubator.storage.googleapis.com
Deleting outdated charts
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>

Error: failed post-install: timed out waiting for the condition
Makefile:123: recipe for target 'deploy-oisp' failed
make[1]: *** [deploy-oisp] Error 1
make[1]: Leaving directory '/home/yanzi/platform-launcher'
Makefile:102: recipe for target 'deploy-oisp-test' failed
make: *** [deploy-oisp-test] Error 2
wagmarcel commented 3 years ago

@ismouhi - when you list the pods with

kubectl -n oisp get pods

do you see any suspicious ImagePullBackOff messages in the pod statuses? In general, when you restart everything, the non-OISP-specific images have to be pulled again (assuming that you imported the OISP-specific images). So, depending on your network connection, it can time out the first time you try to deploy (10m is the default). But if you don't see any image-pulling issues, then undeploying and redeploying should solve it, because some of the general images will already be cached in containerd and it will take less time on the next try.

ismouhi commented 3 years ago

Hi @wagmarcel, there is a 403 Forbidden when trying to download kafka:

namespace/oisp created
"codecentric" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "codecentric" chart repository
...Unable to get an update from the "incubator" chart repository (https://kubernetes-charts-incubator.storage.googleapis.com):
        failed to fetch https://kubernetes-charts-incubator.storage.googleapis.com/index.yaml : 403 Forbidden
Update Complete. ⎈Happy Helming!⎈
Saving 3 charts
Downloading keycloak from repo https://codecentric.github.io/helm-charts
Downloading kafka from repo https://kubernetes-charts-incubator.storage.googleapis.com
Save error occurred:  could not download https://kubernetes-charts-incubator.storage.googleapis.com/kafka-0.20.8.tgz: failed to fetch https://kubernetes-charts-incubator.storage.googleapis.com/kafka-0.20.8.tgz : 403 Forbidden
Deleting newly downloaded charts, restoring pre-update state
Error: could not download https://kubernetes-charts-incubator.storage.googleapis.com/kafka-0.20.8.tgz: failed to fetch https://kubernetes-charts-incubator.storage.googleapis.com/kafka-0.20.8.tgz : 403 Forbidden
Makefile:123: recipe for target 'deploy-oisp' failed
make[1]: *** [deploy-oisp] Error 1
make[1]: Leaving directory '/home/yanzi/platform-launcher'
Makefile:102: recipe for target 'deploy-oisp-test' failed
make: *** [deploy-oisp-test] Error 2
wagmarcel commented 3 years ago

@ismouhi thanks for pointing us to that - that must have happened very recently, in a not so silent night :-) The official Helm chart repos migrated and the old URLs were retired. Unfortunately, we still have the old URLs in the scripts. You can fix it manually by typing:

helm repo add incubator https://charts.helm.sh/incubator
helm repo add stable https://charts.helm.sh/stable

then the deployment should work. When you restart the VM you might need to repeat that. I am preparing a PR to fix that soon. Please let me know whether this worked.
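If helm still picks up the retired URL from a chart's requirements file, a sed rewrite along these lines can help (demonstrated here on a temp file; the launcher's actual chart file paths may differ):

```shell
# Rewrite the retired incubator repo URL to its new home (demo on a temp copy).
printf 'repository: https://kubernetes-charts-incubator.storage.googleapis.com\n' > /tmp/requirements.yaml
sed -i 's|kubernetes-charts-incubator.storage.googleapis.com|charts.helm.sh/incubator|' /tmp/requirements.yaml
cat /tmp/requirements.yaml
# → repository: https://charts.helm.sh/incubator
```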

ismouhi commented 3 years ago

Hello @wagmarcel, I tried import-images again and it failed:

Unable to connect to the server: net/http: TLS handshake timeout
Unable to connect to the server: net/http: TLS handshake timeout
Unable to connect to the server: net/http: TLS handshake timeout
websocket-server is saved, copiedunpacking docker.io/oisp/websocket-server:v2.0.0-beta.1 (sha256:ff201e017c21867771219719493d3187407b66f44ec03d8b23f4fa5a10d8af04)...done
, imported
frontend is saved, copiedunpacking docker.io/oisp/frontend:v2.0.0-beta.1 (sha256:a777da977133f5462b6bc7e318f2d5a4d84af593b626c7578b17b3c56b30a3e6)...done
, imported
backend is saved, copiedunpacking docker.io/oisp/backend:v2.0.0-beta.1 (sha256:f225cbcfdcee845ee33c05404eef8a1c56b1e8fefdac39f7922925e977c163d9)...done
, imported
streamer is saved, copiedunpacking docker.io/oisp/streamer:v2.0.0-beta.1 (sha256:bbbafe872e1555b208df410aec43380eb1213f2ac6b6b846bb735cb9a6eb9475)...done
, imported
kairosdb is saved, copiedunpacking docker.io/oisp/kairosdb:v2.0.0-beta.1 (sha256:3eccc06bf84d5db31a91c4f472e88ee24544863c826b62a0451b5c0196d0dfe1)...ctr: content digest sha256:211b4123d543f2cdb4d70b2084f1058a2e50e03ca0d9b406908ec2e5c38e9c4d: not found
mqtt-gateway is saved, copiedunpacking docker.io/oisp/mqtt-gateway:v2.0.0-beta.1 (sha256:e643bee77dd882f66fa2616023230533c8301b861c00c4a97068fce3f4f3b1f8)...done
, imported
mqtt-broker is saved, copiedunpacking docker.io/oisp/mqtt-broker:v2.0.0-beta.1 (sha256:cb14d3e5fc8783fca36737b3d295e352061e3bdae69a33950e7653e1dc61fbb3)...ctr: content digest sha256:7bbf062040dbb9d550ddbca6c1c2dfe65d3a8ed9aa454c039d08413ec1395ed6: not found
grafana is saved, copiedunpacking docker.io/oisp/grafana:v2.0.0-beta.1 (sha256:9752d9eb54db87bc55edc57546c2edb545f5fd4a4b111d6fbaa5462ee4583662)...done
, imported
keycloak is saved, copiedunpacking docker.io/oisp/keycloak:v2.0.0-beta.1 (sha256:502827130613e3929c24e5046f0505cb7a421ad6502864f8957e88d3070163d7)...ctr: content digest sha256:502827130613e3929c24e5046f0505cb7a421ad6502864f8957e88d3070163d7: not found
services-operator is saved, copiedunpacking docker.io/oisp/services-operator:v2.0.0-beta.1 (sha256:8747f0361767fe9f49ee6453f59aabd4091d2bb690db0557f070e558f8c7eab0)...ctr: content digest sha256:40645c5a14243a2a37a288ab54c268987601c414bbe009f54767a6b312e7932d: not found
services-server is saved, copiedunpacking docker.io/oisp/services-server:v2.0.0-beta.1 (sha256:269b4c20df11c648ddb5819ff40f5491cc8bdd4f8102995062aa2a8ba06f491c)...done
, imported
debugger is saved, copiedunpacking docker.io/oisp/debugger:v2.0.0-beta.1 (sha256:e3519729db140070a2e98cbc58b5f81f3a7d233f6cbaee9892cd7aadef52f3a2)...ctr: content digest sha256:96a3b84d0234846399828986b2311510b244240d36dec2f4b794a904c3d7b87d: not found
Makefile:241: recipe for target 'import-images' failed
make: *** [import-images] Error 1

deploy-oisp-test still throws the same error message: Error: failed post-install: timed out waiting for the condition

wagmarcel commented 3 years ago

Hi @ismouhi, can you check whether the k3s cluster is still up? E.g. with:

kubectl get nodes

Also, it could mean that there is a mismatch of kubeconfigs - maybe you need to call it with sudo -E in your setup?
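To illustrate why sudo -E matters here (simulated with env -i, which starts the child with a clean environment the way a plain sudo typically does; the kubeconfig path below is illustrative):

```shell
# env -i wipes the environment, so the child shell no longer sees KUBECONFIG;
# an inherited environment (what `sudo -E` preserves) still has it.
export KUBECONFIG=/home/user/k3s/kubeconfig.yaml
env -i sh -c 'echo "clean env sees: [$KUBECONFIG]"'
sh -c 'echo "inherited env sees: [$KUBECONFIG]"'
# → clean env sees: []
# → inherited env sees: [/home/user/k3s/kubeconfig.yaml]
```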