vhive-serverless / vHive

vHive: Open-source framework for serverless experimentation

Deployment of Functions in vHive failing #539

Closed: aditya2803 closed this issue 2 years ago

aditya2803 commented 2 years ago

Description

I am trying to set up vHive on a single-node cluster and get it working by deploying and then invoking functions, as described in the guide here. I am able to follow the steps manually, and all the Kubernetes pods are running as desired. However, when deploying functions using this link, I run into errors.

System Configuration

lscpu output:

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   48 bits physical, 48 bits virtual
CPU(s):                          256
On-line CPU(s) list:             0-255
Thread(s) per core:              2
Core(s) per socket:              64
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       AuthenticAMD
CPU family:                      25
Model:                           1
Model name:                      AMD Eng Sample: 100-000000314-02_30/16_N
Stepping:                        0
Frequency boost:                 enabled
CPU MHz:                         1600.000
CPU max MHz:                     3000.0000
CPU min MHz:                     1200.0000
BogoMIPS:                        3193.90
Virtualization:                  AMD-V
L1d cache:                       4 MiB
L1i cache:                       4 MiB
L2 cache:                        64 MiB
L3 cache:                        512 MiB
NUMA node0 CPU(s):               0-63,128-191
NUMA node1 CPU(s):               64-127,192-255
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_
                                 opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 f
                                 ma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a
                                  misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l
                                 3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall erms xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm
                                 _local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid de
                                 codeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov suc
                                 cor smca fsrm

cat /etc/os-release output:

NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Logs

vHive logs:

time="2022-05-19T12:15:03.479476427Z" level=error msg="coordinator failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unknown desc = failed to create VM: failed to start the VM: [PUT /actions][400] createSyncActionBadRequest  &{FaultMessage:Internal error while starting microVM: VcpuConfigure(CpuId(InvalidVendor))}" image="ghcr.io/ease-lab/helloworld:var_workload" vmID=1
time="2022-05-19T12:15:03.479560085Z" level=error msg="failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unknown desc = failed to create VM: failed to start the VM: [PUT /actions][400] createSyncActionBadRequest  &{FaultMessage:Internal error while starting microVM: VcpuConfigure(CpuId(InvalidVendor))}"
time="2022-05-19T12:15:03.482077842Z" level=error msg="VM config for pod d021d0b8ad35ac3cc8d9a0f8202e91dbc2c09081413cf2352a27717df00ed033 does not exist"
time="2022-05-19T12:15:03.482101657Z" level=error error="VM config for pod does not exist"

(Initially I got the same issue as #476. I then applied the solution proposed in that ticket; the logs above are from after applying it.)

Notes

There is a similar issue mentioned here. This seems to be a firecracker-containerd issue for non-Intel vendors, which they appear to have fixed later (as per that issue). I am not sure whether the firecracker-containerd binary used in vHive is the latest one. When I clone the latest firecracker-containerd repo, build and install it, and replace the /vhive/bin/firecracker-containerd binary with the one I built (roughly the steps sketched after the log below), the vHive error log is reduced to:

time="2022-05-19T06:31:18.406741266Z" level=error msg="failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unknown desc = failed to create VM: failed to build VM configuration: no such file or directory"
time="2022-05-19T06:31:18.409782917Z" level=error msg="VM config for pod 84c5ce4eb538a061c3f75497a2b9f8688dc4cbfa351478a81691b05e4e59ff43 does not exist"
time="2022-05-19T06:31:18.409806492Z" level=error error="VM config for pod does not exist"
time="2022-05-19T06:31:36.204002382Z" level=warning msg="Failed to Fetch k8s dns clusterIP exit status 1\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?\n\n"
time="2022-05-19T06:31:36.204047106Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-05-19T06:31:36.350628233Z" level=error msg="coordinator failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unknown desc = failed to create VM: failed to build VM configuration: no such file or directory" image="vhiveease/rnn_serving:var_workload" vmID=263

I have also gone through #525 and have access to /dev/kvm. I am running on a bare-metal x86_64 AMD server with Ubuntu 20.04.
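
As a sanity check, this is how access to /dev/kvm can be verified (a generic shell check, not vHive-specific):

ls -l /dev/kvm
# the user running firecracker-containerd needs read/write access:
[ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "KVM accessible" || echo "no read/write access to /dev/kvm"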

Expected Behavior

Functions should be deployed normally.

Steps to reproduce

Follow the provided start-up guide to set up a one-node cluster, then run the deployer (a rough command sketch follows).
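
For concreteness, the sequence is roughly the one below. The repository URL, script paths, and deployer path are taken from the quickstart guide as I remember it and may differ between vHive versions:

# single-node setup as per the quickstart (paths may have changed)
git clone https://github.com/vhive-serverless/vhive.git && cd vhive
./scripts/cloudlab/setup_node.sh
# start containerd, firecracker-containerd, and vhive as described in the guide, then:
./scripts/cluster/create_one_node_cluster.sh
# deploy the functions defined in configs/knative_workloads/
source /etc/profile && go run examples/deployer/client.go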

ustiugov commented 2 years ago

hi @aditya2803, indeed vHive didn't have the firecracker-containerd folks' fix. We are about to merge a new version of the vHive/Firecracker snapshots with #465. The code in that PR does include a more recent firecracker-containerd binary, which is already tested, but the docs are not yet updated. You can use that branch before the PR is merged.
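
For reference, a standard way to check out that PR locally before it is merged, using GitHub's pull request refs (the local branch name pr-465 is arbitrary):

git fetch origin pull/465/head:pr-465   # fetch PR #465 into a local branch
git checkout pr-465
# rebuild and redeploy the stack from this branch as in the quickstart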

aditya2803 commented 2 years ago

Hi @ustiugov, thanks for your suggestion. I checked out the PR branch and rebuilt the stack. However, the functions are still not being deployed properly. Below is the error I get:

WARN[0600] Failed to deploy function pyaes-1, configs/knative_workloads/pyaes.yaml: exit status 1
Creating service 'pyaes-1' in namespace 'default':

  0.019s The Route is still working to reflect the latest desired specification.
  0.097s Configuration "pyaes-1" is waiting for a Revision to become ready.
Error: timeout: service 'pyaes-1' not ready after 600 seconds
Run 'kn --help' for usage

INFO[0600] Deployed function pyaes-1
WARN[0600] Failed to deploy function pyaes-0, configs/knative_workloads/pyaes.yaml: exit status 1
Creating service 'pyaes-0' in namespace 'default':

  0.029s The Route is still working to reflect the latest desired specification.
  0.123s Configuration "pyaes-0" is waiting for a Revision to become ready.
Error: timeout: service 'pyaes-0' not ready after 600 seconds
Run 'kn --help' for usage

INFO[0600] Deployed function pyaes-0
WARN[0600] Failed to deploy function rnn-serving-1, configs/knative_workloads/rnn_serving.yaml: exit status 1
Creating service 'rnn-serving-1' in namespace 'default':

  0.088s The Route is still working to reflect the latest desired specification.
  0.163s ...
  0.179s Configuration "rnn-serving-1" is waiting for a Revision to become ready.
Error: timeout: service 'rnn-serving-1' not ready after 600 seconds
Run 'kn --help' for usage

WARN[0600] Failed to deploy function rnn-serving-0, configs/knative_workloads/rnn_serving.yaml: exit status 1
Creating service 'rnn-serving-0' in namespace 'default':

  0.126s The Route is still working to reflect the latest desired specification.
  0.169s ...
  0.185s Configuration "rnn-serving-0" is waiting for a Revision to become ready.
Error: timeout: service 'rnn-serving-0' not ready after 600 seconds
Run 'kn --help' for usage

WARN[0600] Failed to deploy function helloworld-0, configs/knative_workloads/helloworld.yaml: exit status 1
Creating service 'helloworld-0' in namespace 'default':

  0.086s The Route is still working to reflect the latest desired specification.
  0.161s ...
  0.183s Configuration "helloworld-0" is waiting for a Revision to become ready.
Error: timeout: service 'helloworld-0' not ready after 600 seconds
Run 'kn --help' for usage

INFO[0600] Deployed function helloworld-0
INFO[0600] Deployed function rnn-serving-0
INFO[0600] Deployed function rnn-serving-1
WARN[1200] Failed to deploy function rnn-serving-2, configs/knative_workloads/rnn_serving.yaml: exit status 1
Creating service 'rnn-serving-2' in namespace 'default':

  0.025s The Route is still working to reflect the latest desired specification.
  0.059s ...
  0.087s Configuration "rnn-serving-2" is waiting for a Revision to become ready.
Error: timeout: service 'rnn-serving-2' not ready after 600 seconds
Run 'kn --help' for usage

INFO[1200] Deployed function rnn-serving-2
INFO[1200] Deployment finished

Output of kubectl describe revision/helloworld-0-00001

Name:         helloworld-0-00001
Namespace:    default
Labels:       serving.knative.dev/configuration=helloworld-0
              serving.knative.dev/configurationGeneration=1
              serving.knative.dev/configurationUID=700a477f-63f1-445f-bb64-291d3b62016b
              serving.knative.dev/routingState=active
              serving.knative.dev/service=helloworld-0
              serving.knative.dev/serviceUID=aa954d1d-975f-416f-901c-1e68572c26e4
Annotations:  autoscaling.knative.dev/target: 1
              serving.knative.dev/creator: kubernetes-admin
              serving.knative.dev/routes: helloworld-0
              serving.knative.dev/routingStateModified: 2022-05-20T17:45:45Z
API Version:  serving.knative.dev/v1
Kind:         Revision
Metadata:
  Creation Timestamp:  2022-05-20T17:45:45Z
  Generation:          1
  Managed Fields:
    API Version:  serving.knative.dev/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:autoscaling.knative.dev/target:
          f:serving.knative.dev/creator:
          f:serving.knative.dev/routes:
          f:serving.knative.dev/routingStateModified:
        f:labels:
          .:
          f:serving.knative.dev/configuration:
          f:serving.knative.dev/configurationGeneration:
          f:serving.knative.dev/configurationUID:
          f:serving.knative.dev/routingState:
          f:serving.knative.dev/service:
          f:serving.knative.dev/serviceUID:
        f:ownerReferences:
          .:
          k:{"uid":"700a477f-63f1-445f-bb64-291d3b62016b"}:
      f:spec:
        .:
        f:containerConcurrency:
        f:containers:
        f:enableServiceLinks:
        f:timeoutSeconds:
    Manager:      Go-http-client
    Operation:    Update
    Time:         2022-05-20T17:45:45Z
    API Version:  serving.knative.dev/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:actualReplicas:
        f:conditions:
        f:containerStatuses:
        f:observedGeneration:
    Manager:      Go-http-client
    Operation:    Update
    Subresource:  status
    Time:         2022-05-20T17:45:45Z
  Owner References:
    API Version:           serving.knative.dev/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Configuration
    Name:                  helloworld-0
    UID:                   700a477f-63f1-445f-bb64-291d3b62016b
  Resource Version:        6933
  UID:                     d547b289-d825-4c9c-9e09-dd1398b6cc12
Spec:
  Container Concurrency:  0
  Containers:
    Env:
      Name:   GUEST_PORT
      Value:  50051
      Name:   GUEST_IMAGE
      Value:  ghcr.io/ease-lab/helloworld:var_workload
    Image:    crccheck/hello-world:latest
    Name:     user-container
    Ports:
      Container Port:  50051
      Name:            h2c
      Protocol:        TCP
    Readiness Probe:
      Success Threshold:  1
      Tcp Socket:
        Port:  0
    Resources:
  Enable Service Links:  false
  Timeout Seconds:       300
Status:
  Actual Replicas:  0
  Conditions:
    Last Transition Time:  2022-05-20T17:56:15Z
    Message:               The target is not receiving traffic.
    Reason:                NoTraffic
    Severity:              Info
    Status:                False
    Type:                  Active
    Last Transition Time:  2022-05-20T17:45:45Z
    Reason:                Deploying
    Status:                Unknown
    Type:                  ContainerHealthy
    Last Transition Time:  2022-05-20T17:56:15Z
    Message:               Initial scale was never achieved
    Reason:                ProgressDeadlineExceeded
    Status:                False
    Type:                  Ready
    Last Transition Time:  2022-05-20T17:56:15Z
    Message:               Initial scale was never achieved
    Reason:                ProgressDeadlineExceeded
    Status:                False
    Type:                  ResourcesAvailable
  Container Statuses:
    Name:               user-container
  Observed Generation:  1
Events:
  Type     Reason         Age   From                 Message
  ----     ------         ----  ----                 -------
  Warning  InternalError  34m   revision-controller  failed to update deployment "helloworld-0-00001-deployment": Operation cannot be fulfilled on deployments.apps "helloworld-0-00001-deployment": the object has been modified; please apply your changes to the latest version and try again

Note that I am using the #481 fix in my local code, as you suggested.

Another observation: running the setup script with the 'stock-only' option results in the functions being deployed properly. The issue only comes up with firecracker-containerd (the default).
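
For clarity, the two invocations I am comparing are roughly the following (script name and argument as in the quickstart; they may have changed since):

./scripts/cluster/create_one_node_cluster.sh stock-only   # stock Knative on containerd: deploys fine
./scripts/cluster/create_one_node_cluster.sh              # default firecracker-containerd path: times out as above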

Services in case of using stock-only:

NAME            URL                                                   LATEST                AGE     CONDITIONS   READY   REASON
helloworld-0    http://helloworld-0.default.192.168.1.240.sslip.io    helloworld-0-00001    7m40s   3 OK / 3     True
pyaes-0         http://pyaes-0.default.192.168.1.240.sslip.io         pyaes-0-00001         7m40s   3 OK / 3     True
pyaes-1         http://pyaes-1.default.192.168.1.240.sslip.io         pyaes-1-00001         7m40s   3 OK / 3     True
rnn-serving-0   http://rnn-serving-0.default.192.168.1.240.sslip.io   rnn-serving-0-00001   7m40s   3 OK / 3     True
rnn-serving-1   http://rnn-serving-1.default.192.168.1.240.sslip.io   rnn-serving-1-00001   7m40s   3 OK / 3     True
rnn-serving-2   http://rnn-serving-2.default.192.168.1.240.sslip.io   rnn-serving-2-00001   7m28s   3 OK / 3     True

ustiugov commented 2 years ago

@aditya2803 I cannot say much without lower-level logs from the firecracker setup (vHive, containerd, firecracker-containerd). The vHive CRI test worked in that branch. Try deploying a new cluster on a fresh node.

Also, the workload YAMLs do not suit the stock-only setup. You need to use YAML files in the conventional Knative format (you can take them from the Knative website).
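
For example, a minimal Service in the conventional Knative format, based on the upstream helloworld-go sample from the Knative docs (image and values are the sample's; adjust as needed):

kubectl apply -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "vHive"
EOF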

aditya2803 commented 2 years ago

@ustiugov Here are the logs:

vhive.stdout logs

time="2022-05-21T12:21:48.816889649Z" level=info msg="Creating containerd client"
time="2022-05-21T12:21:48.817591298Z" level=info msg="Created containerd client"
time="2022-05-21T12:21:48.817639598Z" level=info msg="Creating firecracker client"
time="2022-05-21T12:21:48.817793247Z" level=info msg="Created firecracker client"
time="2022-05-21T12:21:48.825213285Z" level=info msg="Creating image manager"
time="2022-05-21T12:21:48.829549076Z" level=info msg="Registering bridges for tap manager"
time="2022-05-21T12:21:48.831582308Z" level=info msg="Listening on port:3334"
time="2022-05-21T12:21:48.831614107Z" level=info msg="Listening on port:3333"
time="2022-05-21T12:22:48.816984934Z" level=info msg="HEARTBEAT: number of active VMs: 0"
time="2022-05-21T12:22:48.832103671Z" level=info msg="FuncPool heartbeat: ==== Stats by cold functions ====\nfID, #started, #served\n==================================="
time="2022-05-21T12:23:48.816425650Z" level=info msg="HEARTBEAT: number of active VMs: 0"
time="2022-05-21T12:23:48.831493369Z" level=info msg="FuncPool heartbeat: ==== Stats by cold functions ====\nfID, #started, #served\n==================================="
time="2022-05-21T12:24:47.880301696Z" level=warning msg="Failed to Fetch k8s dns clusterIP exit status 1\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?\n\n"
time="2022-05-21T12:24:47.880342182Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-05-21T12:24:47.881345044Z" level=warning msg="Failed to Fetch k8s dns clusterIP exit status 1\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?\n\n"
time="2022-05-21T12:24:47.881372977Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-05-21T12:24:48.816480060Z" level=info msg="HEARTBEAT: number of active VMs: 5"
time="2022-05-21T12:24:48.831606453Z" level=info msg="FuncPool heartbeat: ==== Stats by cold functions ====\nfID, #started, #served\n==================================="
time="2022-05-21T12:24:51.338213803Z" level=warning msg="Failed to Fetch k8s dns clusterIP exit status 1\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?\n\n"
time="2022-05-21T12:24:51.338253999Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-05-21T12:25:11.326207014Z" level=warning msg="Failed to Fetch k8s dns clusterIP exit status 1\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?\n\n"
time="2022-05-21T12:25:11.326246148Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-05-21T12:25:11.327834634Z" level=warning msg="Failed to Fetch k8s dns clusterIP exit status 1\nThe connection to the server localhost:8080 was refused - did you specify the right host or port?\n\n"
time="2022-05-21T12:25:11.327854311Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-05-21T12:25:48.816850231Z" level=info msg="HEARTBEAT: number of active VMs: 5"
time="2022-05-21T12:25:48.831974752Z" level=info msg="FuncPool heartbeat: ==== Stats by cold functions ====\nfID, #started, #served\n==================================="
time="2022-05-21T12:26:48.817059802Z" level=info msg="HEARTBEAT: number of active VMs: 5"
time="2022-05-21T12:26:48.832181982Z" level=info msg="FuncPool heartbeat: ==== Stats by cold functions ====\nfID, #started, #served\n==================================="
time="2022-05-21T12:27:48.817262702Z" level=info msg="HEARTBEAT: number of active VMs: 5"
time="2022-05-21T12:27:48.832380083Z" level=info msg="FuncPool heartbeat: ==== Stats by cold functions ====\nfID, #started, #served\n==================================="
time="20

containerd.stderr logs

time="2022-05-21T12:21:18.695175301Z" level=info msg="starting containerd" revision=de8046a5501db9e0e478e1c10cbcfb21af4c6b2d version=v1.6.2
time="2022-05-21T12:21:18.708869114Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2022-05-21T12:21:18.709312206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:18.711055793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.16.0-rc1inuma+\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:18.711121015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:18.711370815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:18.711395631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:18.711415078Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2022-05-21T12:21:18.711429785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:18.711461485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:18.711697609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:18.711862709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:18.711885732Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2022-05-21T12:21:18.711908104Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2022-05-21T12:21:18.711926228Z" level=info msg="metadata content store policy set" policy=shared
time="2022-05-21T12:21:28.712415211Z" level=warning msg="waiting for response from boltdb open" plugin=bolt

firecracker.stderr logs

time="2022-05-21T12:21:29Z" level=warning msg="deprecated version : `1`, please switch to version `2`"
time="2022-05-21T12:21:29.745823459Z" level=info msg="starting containerd" revision=19c96c059d7a95e8eb7f27b4e2847c4a84898698 version=1.5.5+unknown
time="2022-05-21T12:21:29.763711658Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2022-05-21T12:21:29.763799332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:29.763855197Z" level=info msg="initializing pool device \"fc-dev-thinpool\""
time="2022-05-21T12:21:29.765041918Z" level=info msg="using dmsetup:\nLibrary version:   1.02.167 (2019-11-30)\nDriver version:    4.45.0"
time="2022-05-21T12:21:29.767577192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2022-05-21T12:21:29.767755988Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2022-05-21T12:21:29.767802526Z" level=info msg="metadata content store policy set" policy=shared
time="2022-05-21T12:21:29.768630692Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2022-05-21T12:21:29.768653124Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2022-05-21T12:21:29.768685215Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.768703098Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.768714820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.768725631Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.768737894Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.768749065Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.768762360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.768773090Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.768783509Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2022-05-21T12:21:29.768854944Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2022-05-21T12:21:29.768914405Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2022-05-21T12:21:29.769205653Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.769224659Z" level=info msg="loading plugin \"io.containerd.service.v1.fc-control\"..." type=io.containerd.service.v1
time="2022-05-21T12:21:29.769236962Z" level=debug msg="initializing fc-control plugin (root: \"/var/lib/firecracker-containerd/containerd/io.containerd.service.v1.fc-control\")"
time="2022-05-21T12:21:29.787087539Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2022-05-21T12:21:29.787154725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787173691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787190302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787206202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787221761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787238152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787254453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787273940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787289529Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2022-05-21T12:21:29.787350123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787370240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787386521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787401710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.fc-control-service\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787419032Z" level=debug msg="initializing fc-control-service plugin"
time="2022-05-21T12:21:29.787436225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2022-05-21T12:21:29.787618237Z" level=info msg=serving... address=/run/firecracker-containerd/containerd.sock.ttrpc
time="2022-05-21T12:21:29.787663662Z" level=info msg=serving... address=/run/firecracker-containerd/containerd.sock
time="2022-05-21T12:21:29.787677859Z" level=debug msg="sd notification" error="<nil>" notified=false state="READY=1"
time="2022-05-21T12:21:29.787692446Z" level=info msg="containerd successfully booted in 0.042626s"
time="2022-05-21T12:21:29.869377712Z" level=debug msg="garbage collected" d="453.583µs"
time="2022-05-21T12:24:38.435253405Z" level=debug msg="(*service).Write started" expected="sha256:6a11e6dbd88b1ce1ebb284c769b52e3fdb66a0f37b392bded5612045ff2cae61" ref="manifest-sha256:6a11e6dbd88b1ce1ebb284c769b52e3fdb66a0f37b392bded5612045ff2cae61" total=1996
time="2022-05-21T12:24:38.732027372Z" level=debug msg="(*service).Write started" expected="sha256:8a5cab1e2faec39c2e1215778ed65d63584e279a82685d77a447c7c7d36a4b17" ref="config-sha256:8a5cab1e2faec39c2e1215778ed65d63584e279a82685d77a447c7c7d36a4b17" total=8868
time="2022-05-21T12:24:39.415921874Z" level=debug msg="stat snapshot" key="sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215"
time="2022-05-21T12:24:39.416202714Z" level=debug msg="prepare snapshot" key="extract-416064543-vxuX sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215" parent=
time="2022-05-21T12:24:39.416536273Z" level=debug msg=prepare key="firecracker-containerd/1/extract-416064543-vxuX sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215" parent=
time="2022-05-21T12:24:39.416957517Z" level=debug msg="creating new thin device 'fc-dev-thinpool-snap-1'"
time="2022-05-21T12:24:39.485662564Z" level=debug msg="mkfs.ext4 -E nodiscard,lazy_itable_init=0,lazy_journal_init=0 /dev/mapper/fc-dev-thinpool-snap-1"
time="2022-05-21T12:24:39.749567331Z" level=debug msg="(*service).Write started" expected="sha256:75d39d67fbb3ca85eb89ece0b38e24ab7dadb2fccf9576a00cd87588aad7c460" ref="manifest-sha256:75d39d67fbb3ca85eb89ece0b38e24ab7dadb2fccf9576a00cd87588aad7c460" total=1998
time="2022-05-21T12:24:40.366557876Z" level=debug msg="mkfs:\nmke2fs 1.45.5 (07-Jan-2020)\nCreating filesystem with 2621440 4k blocks and 655360 inodes\nFilesystem UUID: 2874b42c-f116-4a8c-98bd-e6fb6a227caa\nSuperblock backups stored on blocks: \n\t32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632\n\nAllocating group tables:  0/80\b\b\b\b\b     \b\b\b\b\bdone                            \nWriting inode tables:  0/80\b\b\b\b\b     \b\b\b\b\bdone                            \nCreating journal (16384 blocks): done\nWriting superblocks and filesystem accounting information:  0/80\b\b\b\b\b     \b\b\b\b\bdone\n\n"
time="2022-05-21T12:24:40.383084820Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:40.383528527Z" level=debug msg="(*service).Write started" expected="sha256:dfd5ae2430bfdaa3eabe80a09fef72b7b1b34a9b5ffe7690b3822cdad290cba5" ref="layer-sha256:dfd5ae2430bfdaa3eabe80a09fef72b7b1b34a9b5ffe7690b3822cdad290cba5" total=55723509
time="2022-05-21T12:24:40.383542864Z" level=debug msg="(*service).Write started" expected="sha256:614456ff946738237eb1d5e7ddb9b3b9578292cd2de96317aa37d76ea0a4eea9" ref="layer-sha256:614456ff946738237eb1d5e7ddb9b3b9578292cd2de96317aa37d76ea0a4eea9" total=185738
time="2022-05-21T12:24:40.383631712Z" level=debug msg="(*service).Write started" expected="sha256:72c1fa02b2c870da7fd4c4a0af11b837cd448185b4ff31f3ced4c1e11199d743" ref="layer-sha256:72c1fa02b2c870da7fd4c4a0af11b837cd448185b4ff31f3ced4c1e11199d743" total=248074790
time="2022-05-21T12:24:40.383655687Z" level=debug msg="(*service).Write started" expected="sha256:ff3a5c916c92643ff77519ffa742d3ec61b7f591b6b7504599d95a4a41134e28" ref="layer-sha256:ff3a5c916c92643ff77519ffa742d3ec61b7f591b6b7504599d95a4a41134e28" total=2065537
time="2022-05-21T12:24:40.383684742Z" level=debug msg="(*service).Write started" expected="sha256:466a9644be5453fb0268d102159dd91b988e5d24f84431d0a5a57ee7ff21de2b" ref="layer-sha256:466a9644be5453fb0268d102159dd91b988e5d24f84431d0a5a57ee7ff21de2b" total=3742
time="2022-05-21T12:24:40.383715099Z" level=debug msg="(*service).Write started" expected="sha256:964f5a9ea2070018f381d9c968d435bc4576497232bd7d3e79121b180ef2169a" ref="layer-sha256:964f5a9ea2070018f381d9c968d435bc4576497232bd7d3e79121b180ef2169a" total=125
time="2022-05-21T12:24:40.383724968Z" level=debug msg="(*service).Write started" expected="sha256:95853ec29c67ccc835034ef04f5765d1064b835ffb476e2a073dbb8e7b3d7cf3" ref="layer-sha256:95853ec29c67ccc835034ef04f5765d1064b835ffb476e2a073dbb8e7b3d7cf3" total=87932348
time="2022-05-21T12:24:40.383854722Z" level=debug msg="(*service).Write started" expected="sha256:f6993a2cb9082ebcb2d8d151f19a1137ebbe7c642e8a3c41aac38f816c15c4c7" ref="layer-sha256:f6993a2cb9082ebcb2d8d151f19a1137ebbe7c642e8a3c41aac38f816c15c4c7" total=98
time="2022-05-21T12:24:40.541633938Z" level=debug msg="(*service).Write started" expected="sha256:3691e79f01ef2ba64a855ef7621b04b3dbb0b4c689d27ebaa8644d4cb1a7e28f" ref="config-sha256:3691e79f01ef2ba64a855ef7621b04b3dbb0b4c689d27ebaa8644d4cb1a7e28f" total=8312
time="2022-05-21T12:24:41.315518049Z" level=debug msg="stat snapshot" key="sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10"
time="2022-05-21T12:24:41.315847150Z" level=debug msg="prepare snapshot" key="extract-315698480-u59L sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10" parent=
time="2022-05-21T12:24:41.316209544Z" level=debug msg=prepare key="firecracker-containerd/2/extract-315698480-u59L sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10" parent=
time="2022-05-21T12:24:41.316272633Z" level=debug msg="creating new thin device 'fc-dev-thinpool-snap-2'"
time="2022-05-21T12:24:41.380191672Z" level=debug msg="mkfs.ext4 -E nodiscard,lazy_itable_init=0,lazy_journal_init=0 /dev/mapper/fc-dev-thinpool-snap-2"
time="2022-05-21T12:24:41.446436168Z" level=debug msg="using pigz for decompression"
time="2022-05-21T12:24:42.197733236Z" level=debug msg="mkfs:\nmke2fs 1.45.5 (07-Jan-2020)\nCreating filesystem with 2621440 4k blocks and 655360 inodes\nFilesystem UUID: ad64cd7c-02b2-40d9-b823-7e1ce89b8078\nSuperblock backups stored on blocks: \n\t32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632\n\nAllocating group tables:  0/80\b\b\b\b\b     \b\b\b\b\bdone                            \nWriting inode tables:  0/80\b\b\b\b\b     \b\b\b\b\bdone                            \nCreating journal (16384 blocks): done\nWriting superblocks and filesystem accounting information:  0/80\b\b\b\b\b     \b\b\b\b\bdone\n\n"
time="2022-05-21T12:24:42.218491271Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:42.218951870Z" level=debug msg="(*service).Write started" expected="sha256:d02232cde789f60bfefd38a38c22df68cb75f0b4a6e17f11876650bc1845acaf" ref="layer-sha256:d02232cde789f60bfefd38a38c22df68cb75f0b4a6e17f11876650bc1845acaf" total=637463
time="2022-05-21T12:24:42.218985574Z" level=debug msg="(*service).Write started" expected="sha256:00a47c8ade3f6bcd1061541f4387e56d3fcba420f67b4234ade01d51635572f4" ref="layer-sha256:00a47c8ade3f6bcd1061541f4387e56d3fcba420f67b4234ade01d51635572f4" total=2435
time="2022-05-21T12:24:42.219006212Z" level=debug msg="(*service).Write started" expected="sha256:407da27a03363f6b9d368ec6e131f7f2db7c8cb2a149160d913d6f3698905a5d" ref="layer-sha256:407da27a03363f6b9d368ec6e131f7f2db7c8cb2a149160d913d6f3698905a5d" total=1887472
time="2022-05-21T12:24:42.219067338Z" level=debug msg="(*service).Write started" expected="sha256:61614c1a5710c76af6b2a9c7170a81eb0dd76ccf90e921abc0c9dcc1d5ed490e" ref="layer-sha256:61614c1a5710c76af6b2a9c7170a81eb0dd76ccf90e921abc0c9dcc1d5ed490e" total=31506523
time="2022-05-21T12:24:42.219040487Z" level=debug msg="(*service).Write started" expected="sha256:c9b1b535fdd91a9855fb7f82348177e5f019329a58c53c47272962dd60f71fc9" ref="layer-sha256:c9b1b535fdd91a9855fb7f82348177e5f019329a58c53c47272962dd60f71fc9" total=2802957
time="2022-05-21T12:24:42.219047070Z" level=debug msg="(*service).Write started" expected="sha256:2cc5ad85d9abaadf23d5ae53c3f32e7ccb2df1956869980bfd2491ff396d348a" ref="layer-sha256:2cc5ad85d9abaadf23d5ae53c3f32e7ccb2df1956869980bfd2491ff396d348a" total=301261
time="2022-05-21T12:24:42.219080272Z" level=debug msg="(*service).Write started" expected="sha256:0522d30cde10ac29ae2c555b9bde76c2b50aafc7ef7435bbc7e19de706bcadcd" ref="layer-sha256:0522d30cde10ac29ae2c555b9bde76c2b50aafc7ef7435bbc7e19de706bcadcd" total=230
time="2022-05-21T12:24:42.219102684Z" level=debug msg="(*service).Write started" expected="sha256:adc08e00a651383f0333647c65bedddc8826225b3a3d8da06c4f8e678f935b71" ref="layer-sha256:adc08e00a651383f0333647c65bedddc8826225b3a3d8da06c4f8e678f935b71" total=20418883
time="2022-05-21T12:24:42.266679448Z" level=debug msg="diff applied" d=820.460239ms digest="sha256:ff3a5c916c92643ff77519ffa742d3ec61b7f591b6b7504599d95a4a41134e28" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=2065537
time="2022-05-21T12:24:42.266997488Z" level=debug msg="commit snapshot" key="extract-416064543-vxuX sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215" name="sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215"
time="2022-05-21T12:24:42.267152089Z" level=debug msg=commit key="firecracker-containerd/1/extract-416064543-vxuX sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215" name="firecracker-containerd/3/sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215"
time="2022-05-21T12:24:42.369148974Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:42.370149522Z" level=debug msg="stat snapshot" key="sha256:e4e4f6845ea6130dbe3b08e769e3bbc16a9f0dfe037f0380c5123e9b0d9a34d6"
time="2022-05-21T12:24:42.370489393Z" level=debug msg="prepare snapshot" key="extract-370322448-tHLo sha256:e4e4f6845ea6130dbe3b08e769e3bbc16a9f0dfe037f0380c5123e9b0d9a34d6" parent="sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215"
time="2022-05-21T12:24:42.370884388Z" level=debug msg=prepare key="firecracker-containerd/4/extract-370322448-tHLo sha256:e4e4f6845ea6130dbe3b08e769e3bbc16a9f0dfe037f0380c5123e9b0d9a34d6" parent="firecracker-containerd/3/sha256:cd7100a72410606589a54b932cabd804a17f9ae5b42a1882bd56d263e02b6215"
time="2022-05-21T12:24:42.370939652Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-3' from 'fc-dev-thinpool-snap-1'"
time="2022-05-21T12:24:42.448962736Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:43.291468091Z" level=debug msg="diff applied" d=90.261239ms digest="sha256:c9b1b535fdd91a9855fb7f82348177e5f019329a58c53c47272962dd60f71fc9" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=2802957
time="2022-05-21T12:24:43.291974076Z" level=debug msg="commit snapshot" key="extract-315698480-u59L sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10" name="sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10"
time="2022-05-21T12:24:43.292098861Z" level=debug msg=commit key="firecracker-containerd/2/extract-315698480-u59L sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10" name="firecracker-containerd/5/sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10"
time="2022-05-21T12:24:43.378639782Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:43.380558221Z" level=debug msg="stat snapshot" key="sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5"
time="2022-05-21T12:24:43.381242842Z" level=debug msg="prepare snapshot" key="extract-380898302-n1sU sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5" parent="sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10"
time="2022-05-21T12:24:43.381640893Z" level=debug msg=prepare key="firecracker-containerd/6/extract-380898302-n1sU sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5" parent="firecracker-containerd/5/sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10"
time="2022-05-21T12:24:43.381710374Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-4' from 'fc-dev-thinpool-snap-2'"
time="2022-05-21T12:24:43.455687766Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:43.517770432Z" level=debug msg="diff applied" d=61.720533ms digest="sha256:2cc5ad85d9abaadf23d5ae53c3f32e7ccb2df1956869980bfd2491ff396d348a" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=301261
time="2022-05-21T12:24:43.518169405Z" level=debug msg="commit snapshot" key="extract-380898302-n1sU sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5" name="sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5"
time="2022-05-21T12:24:43.518363731Z" level=debug msg=commit key="firecracker-containerd/6/extract-380898302-n1sU sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5" name="firecracker-containerd/7/sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5"
time="2022-05-21T12:24:43.585493207Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:43.587100829Z" level=debug msg="stat snapshot" key="sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa"
time="2022-05-21T12:24:43.587699268Z" level=debug msg="prepare snapshot" key="extract-587398761-hPJS sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa" parent="sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5"
time="2022-05-21T12:24:43.588068184Z" level=debug msg=prepare key="firecracker-containerd/8/extract-587398761-hPJS sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa" parent="firecracker-containerd/7/sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5"
time="2022-05-21T12:24:43.588130952Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-5' from 'fc-dev-thinpool-snap-4'"
time="2022-05-21T12:24:43.645274792Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:46.176828369Z" level=debug msg="diff applied" d=1.479437422s digest="sha256:61614c1a5710c76af6b2a9c7170a81eb0dd76ccf90e921abc0c9dcc1d5ed490e" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=31506523
time="2022-05-21T12:24:46.177281684Z" level=debug msg="commit snapshot" key="extract-587398761-hPJS sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa" name="sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa"
time="2022-05-21T12:24:46.177570749Z" level=debug msg=commit key="firecracker-containerd/8/extract-587398761-hPJS sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa" name="firecracker-containerd/9/sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa"
time="2022-05-21T12:24:46.237554115Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:46.238576995Z" level=debug msg="stat snapshot" key="sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f"
time="2022-05-21T12:24:46.238895235Z" level=debug msg="prepare snapshot" key="extract-238771492-CcnZ sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f" parent="sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa"
time="2022-05-21T12:24:46.239264402Z" level=debug msg=prepare key="firecracker-containerd/10/extract-238771492-CcnZ sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f" parent="firecracker-containerd/9/sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa"
time="2022-05-21T12:24:46.239337229Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-6' from 'fc-dev-thinpool-snap-5'"
time="2022-05-21T12:24:46.324387579Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:46.335863566Z" level=debug msg="diff applied" d=10.46531ms digest="sha256:0522d30cde10ac29ae2c555b9bde76c2b50aafc7ef7435bbc7e19de706bcadcd" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=230
time="2022-05-21T12:24:46.336144857Z" level=debug msg="commit snapshot" key="extract-238771492-CcnZ sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f" name="sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f"
time="2022-05-21T12:24:46.336294279Z" level=debug msg=commit key="firecracker-containerd/10/extract-238771492-CcnZ sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f" name="firecracker-containerd/11/sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f"
time="2022-05-21T12:24:46.422005696Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:46.423114728Z" level=debug msg="stat snapshot" key="sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270"
time="2022-05-21T12:24:46.423647052Z" level=debug msg="prepare snapshot" key="extract-423391750-ND6S sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270" parent="sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f"
time="2022-05-21T12:24:46.424063217Z" level=debug msg=prepare key="firecracker-containerd/12/extract-423391750-ND6S sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270" parent="firecracker-containerd/11/sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f"
time="2022-05-21T12:24:46.424146594Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-7' from 'fc-dev-thinpool-snap-6'"
time="2022-05-21T12:24:46.512145636Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:46.664512796Z" level=debug msg="diff applied" d=151.860434ms digest="sha256:407da27a03363f6b9d368ec6e131f7f2db7c8cb2a149160d913d6f3698905a5d" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=1887472
time="2022-05-21T12:24:46.666326848Z" level=debug msg="commit snapshot" key="extract-423391750-ND6S sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270" name="sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270"
time="2022-05-21T12:24:46.666727544Z" level=debug msg=commit key="firecracker-containerd/12/extract-423391750-ND6S sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270" name="firecracker-containerd/13/sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270"
time="2022-05-21T12:24:46.729776543Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:46.730786668Z" level=debug msg="stat snapshot" key="sha256:cf144147664338bb0a5b647411ad7e6c14b7c87d243cc114550f3f1c07d80edc"
time="2022-05-21T12:24:46.731153410Z" level=debug msg="prepare snapshot" key="extract-730983379-ugnd sha256:cf144147664338bb0a5b647411ad7e6c14b7c87d243cc114550f3f1c07d80edc" parent="sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270"
time="2022-05-21T12:24:46.731532916Z" level=debug msg=prepare key="firecracker-containerd/14/extract-730983379-ugnd sha256:cf144147664338bb0a5b647411ad7e6c14b7c87d243cc114550f3f1c07d80edc" parent="firecracker-containerd/13/sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270"
time="2022-05-21T12:24:46.731620090Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-8' from 'fc-dev-thinpool-snap-7'"
time="2022-05-21T12:24:46.800469801Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:46.810384333Z" level=debug msg="diff applied" d=9.564301ms digest="sha256:00a47c8ade3f6bcd1061541f4387e56d3fcba420f67b4234ade01d51635572f4" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=2435
time="2022-05-21T12:24:46.810671014Z" level=debug msg="commit snapshot" key="extract-730983379-ugnd sha256:cf144147664338bb0a5b647411ad7e6c14b7c87d243cc114550f3f1c07d80edc" name="sha256:cf144147664338bb0a5b647411ad7e6c14b7c87d243cc114550f3f1c07d80edc"
time="2022-05-21T12:24:46.810856303Z" level=debug msg=commit key="firecracker-containerd/14/extract-730983379-ugnd sha256:cf144147664338bb0a5b647411ad7e6c14b7c87d243cc114550f3f1c07d80edc" name="firecracker-containerd/15/sha256:cf144147664338bb0a5b647411ad7e6c14b7c87d243cc114550f3f1c07d80edc"
time="2022-05-21T12:24:46.878248575Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:46.879355713Z" level=debug msg="stat snapshot" key="sha256:66a1cfe2f05ed05d825c22c11c95890bcedc47070570e361a2afbd6fbade7ea2"
time="2022-05-21T12:24:46.879695234Z" level=debug msg="prepare snapshot" key="extract-879544840-15tN sha256:66a1cfe2f05ed05d825c22c11c95890bcedc47070570e361a2afbd6fbade7ea2" parent="sha256:cf144147664338bb0a5b647411ad7e6c14b7c87d243cc114550f3f1c07d80edc"
time="2022-05-21T12:24:46.880134622Z" level=debug msg=prepare key="firecracker-containerd/16/extract-879544840-15tN sha256:66a1cfe2f05ed05d825c22c11c95890bcedc47070570e361a2afbd6fbade7ea2" parent="firecracker-containerd/15/sha256:cf144147664338bb0a5b647411ad7e6c14b7c87d243cc114550f3f1c07d80edc"
time="2022-05-21T12:24:46.880213501Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-9' from 'fc-dev-thinpool-snap-8'"
time="2022-05-21T12:24:46.963029707Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:47.571625831Z" level=debug msg="(*service).Write started" expected="sha256:12dc6715ed1a8306f246ceaf7742c09e38a52a79d17421e4a50d7e0e09fdbc25" ref="manifest-sha256:12dc6715ed1a8306f246ceaf7742c09e38a52a79d17421e4a50d7e0e09fdbc25" total=1998
time="2022-05-21T12:24:47.575516079Z" level=debug msg="diff applied" d=612.204631ms digest="sha256:adc08e00a651383f0333647c65bedddc8826225b3a3d8da06c4f8e678f935b71" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=20418883
time="2022-05-21T12:24:47.575876970Z" level=debug msg="commit snapshot" key="extract-879544840-15tN sha256:66a1cfe2f05ed05d825c22c11c95890bcedc47070570e361a2afbd6fbade7ea2" name="sha256:66a1cfe2f05ed05d825c22c11c95890bcedc47070570e361a2afbd6fbade7ea2"
time="2022-05-21T12:24:47.576081836Z" level=debug msg=commit key="firecracker-containerd/16/extract-879544840-15tN sha256:66a1cfe2f05ed05d825c22c11c95890bcedc47070570e361a2afbd6fbade7ea2" name="firecracker-containerd/17/sha256:66a1cfe2f05ed05d825c22c11c95890bcedc47070570e361a2afbd6fbade7ea2"
time="2022-05-21T12:24:47.646240867Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:47.647235584Z" level=debug msg="stat snapshot" key="sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a"
time="2022-05-21T12:24:47.647640107Z" level=debug msg="prepare snapshot" key="extract-647453625-dkKb sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a" parent="sha256:66a1cfe2f05ed05d825c22c11c95890bcedc47070570e361a2afbd6fbade7ea2"
time="2022-05-21T12:24:47.648136062Z" level=debug msg=prepare key="firecracker-containerd/18/extract-647453625-dkKb sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a" parent="firecracker-containerd/17/sha256:66a1cfe2f05ed05d825c22c11c95890bcedc47070570e361a2afbd6fbade7ea2"
time="2022-05-21T12:24:47.648230160Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-10' from 'fc-dev-thinpool-snap-9'"
time="2022-05-21T12:24:47.728820467Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:47.761914041Z" level=debug msg="diff applied" d=32.634048ms digest="sha256:d02232cde789f60bfefd38a38c22df68cb75f0b4a6e17f11876650bc1845acaf" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=637463
time="2022-05-21T12:24:47.762254243Z" level=debug msg="commit snapshot" key="extract-647453625-dkKb sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a" name="sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a"
time="2022-05-21T12:24:47.762417060Z" level=debug msg=commit key="firecracker-containerd/18/extract-647453625-dkKb sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a" name="firecracker-containerd/19/sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a"
time="2022-05-21T12:24:47.833540320Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:47.835257118Z" level=debug msg="create image" name="ghcr.io/ease-lab/pyaes:var_workload" target="sha256:75d39d67fbb3ca85eb89ece0b38e24ab7dadb2fccf9576a00cd87588aad7c460"
time="2022-05-21T12:24:47.835625083Z" level=debug msg="event published" ns=firecracker-containerd topic=/images/create type=containerd.services.images.v1.ImageCreate
time="2022-05-21T12:24:47.856726997Z" level=debug msg="garbage collected" d="794.93µs"
time="2022-05-21T12:24:47.881668745Z" level=debug msg="create VM request: VMID:\"2\" MachineCfg:<MemSizeMib:256 VcpuCount:1 > KernelArgs:\"ro noapic reboot=k panic=1 pci=off nomodules systemd.log_color=false systemd.unit=firecracker.target init=/sbin/overlay-init tsc=reliable quiet 8250.nr_uarts=0 ipv6.disable=1\" NetworkInterfaces:<StaticConfig:<MacAddress:\"02:FC:00:00:00:01\" HostDevName:\"2_tap\" IPConfig:<PrimaryAddr:\"190.128.0.3/10\" GatewayAddr:\"190.128.0.1\" Nameservers:\"8.8.8.8\" > > > TimeoutSeconds:100 OffloadEnabled:true "
time="2022-05-21T12:24:47.881743926Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:24:47.881729820Z" level=debug msg="create VM request: VMID:\"3\" MachineCfg:<MemSizeMib:256 VcpuCount:1 > KernelArgs:\"ro noapic reboot=k panic=1 pci=off nomodules systemd.log_color=false systemd.unit=firecracker.target init=/sbin/overlay-init tsc=reliable quiet 8250.nr_uarts=0 ipv6.disable=1\" NetworkInterfaces:<StaticConfig:<MacAddress:\"02:FC:00:00:00:02\" HostDevName:\"3_tap\" IPConfig:<PrimaryAddr:\"190.128.0.4/10\" GatewayAddr:\"190.128.0.1\" Nameservers:\"8.8.8.8\" > > > TimeoutSeconds:100 OffloadEnabled:true "
time="2022-05-21T12:24:47.881768353Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:24:47.882022292Z" level=debug msg="starting containerd-shim-aws-firecracker" vmID=2
time="2022-05-21T12:24:47.882041147Z" level=debug msg="starting containerd-shim-aws-firecracker" vmID=3
time="2022-05-21T12:24:47.885870150Z" level=debug msg="(*service).Write started" expected="sha256:a87533385b75fd1d476d8a1112a0c5db953e3d5b44d4f9db814b1e2e6abb8734" ref="config-sha256:a87533385b75fd1d476d8a1112a0c5db953e3d5b44d4f9db814b1e2e6abb8734" total=8312
time="2022-05-21T12:24:47.931247184Z" level=info msg="starting signal loop" namespace=firecracker-containerd path="/var/lib/firecracker-containerd/shim-base/firecracker-containerd#2" pid=25750
time="2022-05-21T12:24:47.931611973Z" level=info msg="creating new VM" runtime=aws.firecracker vmID=2
time="2022-05-21T12:24:47.931874027Z" level=info msg="Called startVMM(), setting up a VMM on firecracker.sock" runtime=aws.firecracker
time="2022-05-21T12:24:47.935192767Z" level=info msg="starting signal loop" namespace=firecracker-containerd path="/var/lib/firecracker-containerd/shim-base/firecracker-containerd#3" pid=25751
time="2022-05-21T12:24:47.935557806Z" level=info msg="creating new VM" runtime=aws.firecracker vmID=3
time="2022-05-21T12:24:47.935861148Z" level=info msg="Called startVMM(), setting up a VMM on firecracker.sock" runtime=aws.firecracker
time="2022-05-21T12:24:47.943351868Z" level=info msg="refreshMachineConfiguration: [GET /machine-config][200] getMachineConfigurationOK  &{CPUTemplate: HtEnabled:0xc000581a63 MemSizeMib:0xc000581a58 TrackDirtyPages:false VcpuCount:0xc000581a50}" runtime=aws.firecracker
time="2022-05-21T12:24:47.943512210Z" level=info msg="PutGuestBootSource: [PUT /boot-source][204] putGuestBootSourceNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:47.943534622Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/runtime/default-rootfs.img, slot root_drive, root true." runtime=aws.firecracker
time="2022-05-21T12:24:47.943785506Z" level=info msg="Attached drive /var/lib/firecracker-containerd/runtime/default-rootfs.img: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:47.943802327Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#2/ctrstub0, slot MN2HE43UOVRDA, root false." runtime=aws.firecracker
time="2022-05-21T12:24:47.943925149Z" level=info msg="Attached drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#2/ctrstub0: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:47.943939957Z" level=info msg="Attaching NIC 2_tap (hwaddr 02:FC:00:00:00:01) at index 1" runtime=aws.firecracker
time="2022-05-21T12:24:47.947339560Z" level=info msg="refreshMachineConfiguration: [GET /machine-config][200] getMachineConfigurationOK  &{CPUTemplate: HtEnabled:0xc000a08f63 MemSizeMib:0xc000a08f58 TrackDirtyPages:false VcpuCount:0xc000a08f50}" runtime=aws.firecracker
time="2022-05-21T12:24:47.947481367Z" level=info msg="PutGuestBootSource: [PUT /boot-source][204] putGuestBootSourceNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:47.947500443Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/runtime/default-rootfs.img, slot root_drive, root true." runtime=aws.firecracker
time="2022-05-21T12:24:47.947727291Z" level=info msg="Attached drive /var/lib/firecracker-containerd/runtime/default-rootfs.img: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:47.947753110Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#3/ctrstub0, slot MN2HE43UOVRDA, root false." runtime=aws.firecracker
time="2022-05-21T12:24:47.947890158Z" level=info msg="Attached drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#3/ctrstub0: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:47.947904044Z" level=info msg="Attaching NIC 3_tap (hwaddr 02:FC:00:00:00:02) at index 1" runtime=aws.firecracker
time="2022-05-21T12:24:47.955223391Z" level=info msg="startInstance successful: [PUT /actions][204] createSyncActionNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:47.955240503Z" level=info msg="calling agent" runtime=aws.firecracker vmID=2
time="2022-05-21T12:24:47.960050847Z" level=info msg="startInstance successful: [PUT /actions][204] createSyncActionNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:47.960068099Z" level=info msg="calling agent" runtime=aws.firecracker vmID=3
time="2022-05-21T12:24:48.498068019Z" level=debug msg="stat snapshot" key="sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10"
time="2022-05-21T12:24:48.498152759Z" level=debug msg=stat key="firecracker-containerd/5/sha256:5216338b40a7b96416b8b9858974bbe4acc3096ee60acbc4dfb1ee02aecceb10"
time="2022-05-21T12:24:48.498425353Z" level=debug msg="stat snapshot" key="sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5"
time="2022-05-21T12:24:48.498453577Z" level=debug msg=stat key="firecracker-containerd/7/sha256:5edfa66f961ecdadabac1b15441c567a06631fd4cb8a197a2f0399644a3c18d5"
time="2022-05-21T12:24:48.498719518Z" level=debug msg="stat snapshot" key="sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa"
time="2022-05-21T12:24:48.498802194Z" level=debug msg=stat key="firecracker-containerd/9/sha256:483d3bedbf62615f629bcfd167c0f2b45df7afe3ecb6b3f7ffa7ebe2dab70faa"
time="2022-05-21T12:24:48.499064048Z" level=debug msg="stat snapshot" key="sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f"
time="2022-05-21T12:24:48.499105556Z" level=debug msg=stat key="firecracker-containerd/11/sha256:3adc9ed0c3e4c8e4fb839cc55b92b0fefb26459b3d03edcbe139e0bd097fca0f"
time="2022-05-21T12:24:48.499398219Z" level=debug msg="stat snapshot" key="sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270"
time="2022-05-21T12:24:48.499427574Z" level=debug msg=stat key="firecracker-containerd/13/sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270"
time="2022-05-21T12:24:48.499703344Z" level=debug msg="stat snapshot" key="sha256:e4226272b56cdeb334d6c3377374a6760ceae26f704fddedd5ff871e52d19784"
time="2022-05-21T12:24:48.500052072Z" level=debug msg="prepare snapshot" key="extract-499911517-YXoM sha256:e4226272b56cdeb334d6c3377374a6760ceae26f704fddedd5ff871e52d19784" parent="sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270"
time="2022-05-21T12:24:48.500455102Z" level=debug msg=prepare key="firecracker-containerd/20/extract-499911517-YXoM sha256:e4226272b56cdeb334d6c3377374a6760ceae26f704fddedd5ff871e52d19784" parent="firecracker-containerd/13/sha256:7893763263404b202f7a20649fdab037cf7200d38d5a5e6fc35ced2c072df270"
time="2022-05-21T12:24:48.500545703Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-11' from 'fc-dev-thinpool-snap-7'"
time="2022-05-21T12:24:48.576685656Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:48.577123051Z" level=debug msg="(*service).Write started" expected="sha256:0d776ee02572ee50935002da7bd7fcda4a60be4a48c4ace5dd3216c327e6767a" ref="layer-sha256:0d776ee02572ee50935002da7bd7fcda4a60be4a48c4ace5dd3216c327e6767a" total=20365161
time="2022-05-21T12:24:48.577132068Z" level=debug msg="(*service).Write started" expected="sha256:c09d5cdb7367fff0d581bb8003e3520dc3e8bd78811dfb51c92df81e58c3a50d" ref="layer-sha256:c09d5cdb7367fff0d581bb8003e3520dc3e8bd78811dfb51c92df81e58c3a50d" total=637463
time="2022-05-21T12:24:48.577150192Z" level=debug msg="(*service).Write started" expected="sha256:92b614cff45fafd028cec952e0cb2584e8d931bf0321e7d14bfafdf7a50ac3fa" ref="layer-sha256:92b614cff45fafd028cec952e0cb2584e8d931bf0321e7d14bfafdf7a50ac3fa" total=2233
time="2022-05-21T12:24:48.655506586Z" level=info msg="successfully started the VM" runtime=aws.firecracker vmID=2
time="2022-05-21T12:24:48.655959580Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/start type=VMStart
time="2022-05-21T12:24:48.659591121Z" level=debug msg="prepare snapshot" key=2 parent="sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a"
time="2022-05-21T12:24:48.660053503Z" level=debug msg=prepare key=firecracker-containerd/21/2 parent="firecracker-containerd/19/sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a"
time="2022-05-21T12:24:48.660137451Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-12' from 'fc-dev-thinpool-snap-10'"
time="2022-05-21T12:24:48.660288676Z" level=info msg="successfully started the VM" runtime=aws.firecracker vmID=3
time="2022-05-21T12:24:48.660603911Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/start type=VMStart
time="2022-05-21T12:24:48.663921018Z" level=debug msg="prepare snapshot" key=3 parent="sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a"
time="2022-05-21T12:24:48.664368713Z" level=debug msg=prepare key=firecracker-containerd/22/3 parent="firecracker-containerd/19/sha256:e172bf8795d813920c658a6772bb108238dc2cf13f1fc1ee1ac5c595c37da14a"
time="2022-05-21T12:24:48.745766203Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-13' from 'fc-dev-thinpool-snap-10'"
time="2022-05-21T12:24:48.746100884Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:48.748570453Z" level=debug msg="get snapshot mounts" key=2
time="2022-05-21T12:24:48.748648289Z" level=debug msg=mounts key=firecracker-containerd/21/2
time="2022-05-21T12:24:48.758402008Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/create type=containerd.events.ContainerCreate
time="2022-05-21T12:24:48.760085894Z" level=debug msg="get snapshot mounts" key=2
time="2022-05-21T12:24:48.760127803Z" level=debug msg=mounts key=firecracker-containerd/21/2
time="2022-05-21T12:24:48.779856177Z" level=debug msg="garbage collected" d="769.191µs"
time="2022-05-21T12:24:48.807328789Z" level=debug msg=StartShim runtime=aws.firecracker task_id=2
time="2022-05-21T12:24:48.807982271Z" level=debug msg="create VM request: VMID:\"2\" "
time="2022-05-21T12:24:48.808019231Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:24:48.808404187Z" level=info msg="successfully started shim (git commit: 19c96c059d7a95e8eb7f27b4e2847c4a84898698)." runtime=aws.firecracker task_id=2 vmID=2
time="2022-05-21T12:24:48.811229016Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:24:48.822032224Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:48.825356394Z" level=debug msg="get snapshot mounts" key=3
time="2022-05-21T12:24:48.825405937Z" level=debug msg=mounts key=firecracker-containerd/22/3
time="2022-05-21T12:24:48.832160168Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/create type=containerd.events.ContainerCreate
time="2022-05-21T12:24:48.833470099Z" level=debug msg="get snapshot mounts" key=3
time="2022-05-21T12:24:48.833522718Z" level=debug msg=mounts key=firecracker-containerd/22/3
time="2022-05-21T12:24:48.847583772Z" level=debug msg="garbage collected" d="862.737µs"
time="2022-05-21T12:24:48.875450717Z" level=debug msg=StartShim runtime=aws.firecracker task_id=3
time="2022-05-21T12:24:48.876062441Z" level=debug msg="create VM request: VMID:\"3\" "
time="2022-05-21T12:24:48.876086607Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:24:48.876296012Z" level=info msg="successfully started shim (git commit: 19c96c059d7a95e8eb7f27b4e2847c4a84898698)." runtime=aws.firecracker task_id=3 vmID=3
time="2022-05-21T12:24:48.878901356Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:24:48.912138521Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/create type=containerd.events.TaskCreate
time="2022-05-21T12:24:48.928601586Z" level=info msg="successfully created task" ExecID= TaskID=2 pid_in_vm=720 runtime=aws.firecracker vmID=2
time="2022-05-21T12:24:48.934226826Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/start type=containerd.events.TaskStart
time="2022-05-21T12:24:48.975449154Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/create type=containerd.events.TaskCreate
time="2022-05-21T12:24:48.994231583Z" level=info msg="successfully created task" ExecID= TaskID=3 pid_in_vm=719 runtime=aws.firecracker vmID=3
time="2022-05-21T12:24:49.000235839Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/start type=containerd.events.TaskStart
time="2022-05-21T12:24:49.237655906Z" level=debug msg="diff applied" d=19.129675ms digest="sha256:92b614cff45fafd028cec952e0cb2584e8d931bf0321e7d14bfafdf7a50ac3fa" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=2233
time="2022-05-21T12:24:49.238000666Z" level=debug msg="commit snapshot" key="extract-499911517-YXoM sha256:e4226272b56cdeb334d6c3377374a6760ceae26f704fddedd5ff871e52d19784" name="sha256:e4226272b56cdeb334d6c3377374a6760ceae26f704fddedd5ff871e52d19784"
time="2022-05-21T12:24:49.238177780Z" level=debug msg=commit key="firecracker-containerd/20/extract-499911517-YXoM sha256:e4226272b56cdeb334d6c3377374a6760ceae26f704fddedd5ff871e52d19784" name="firecracker-containerd/23/sha256:e4226272b56cdeb334d6c3377374a6760ceae26f704fddedd5ff871e52d19784"
time="2022-05-21T12:24:49.313224341Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:49.317903387Z" level=debug msg="stat snapshot" key="sha256:046f313f5e6a5160dd5c71cdf40aed78bb816bce9030b7b1617dc499a623dac8"
time="2022-05-21T12:24:49.319639702Z" level=debug msg="prepare snapshot" key="extract-318339209-DMDW sha256:046f313f5e6a5160dd5c71cdf40aed78bb816bce9030b7b1617dc499a623dac8" parent="sha256:e4226272b56cdeb334d6c3377374a6760ceae26f704fddedd5ff871e52d19784"
time="2022-05-21T12:24:49.320082337Z" level=debug msg=prepare key="firecracker-containerd/24/extract-318339209-DMDW sha256:046f313f5e6a5160dd5c71cdf40aed78bb816bce9030b7b1617dc499a623dac8" parent="firecracker-containerd/23/sha256:e4226272b56cdeb334d6c3377374a6760ceae26f704fddedd5ff871e52d19784"
time="2022-05-21T12:24:49.320166776Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-14' from 'fc-dev-thinpool-snap-11'"
time="2022-05-21T12:24:49.411564581Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:51.009333002Z" level=debug msg="diff applied" d=607.995066ms digest="sha256:0d776ee02572ee50935002da7bd7fcda4a60be4a48c4ace5dd3216c327e6767a" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=20365161
time="2022-05-21T12:24:51.009737976Z" level=debug msg="commit snapshot" key="extract-318339209-DMDW sha256:046f313f5e6a5160dd5c71cdf40aed78bb816bce9030b7b1617dc499a623dac8" name="sha256:046f313f5e6a5160dd5c71cdf40aed78bb816bce9030b7b1617dc499a623dac8"
time="2022-05-21T12:24:51.009891575Z" level=debug msg=commit key="firecracker-containerd/24/extract-318339209-DMDW sha256:046f313f5e6a5160dd5c71cdf40aed78bb816bce9030b7b1617dc499a623dac8" name="firecracker-containerd/25/sha256:046f313f5e6a5160dd5c71cdf40aed78bb816bce9030b7b1617dc499a623dac8"
time="2022-05-21T12:24:51.085627988Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:51.086715970Z" level=debug msg="stat snapshot" key="sha256:21d1c32ff456f90efb6f819dc94981414274877bd50caee6b7866a3f65280253"
time="2022-05-21T12:24:51.087120152Z" level=debug msg="prepare snapshot" key="extract-86935985-FEyI sha256:21d1c32ff456f90efb6f819dc94981414274877bd50caee6b7866a3f65280253" parent="sha256:046f313f5e6a5160dd5c71cdf40aed78bb816bce9030b7b1617dc499a623dac8"
time="2022-05-21T12:24:51.087529244Z" level=debug msg=prepare key="firecracker-containerd/26/extract-86935985-FEyI sha256:21d1c32ff456f90efb6f819dc94981414274877bd50caee6b7866a3f65280253" parent="firecracker-containerd/25/sha256:046f313f5e6a5160dd5c71cdf40aed78bb816bce9030b7b1617dc499a623dac8"
time="2022-05-21T12:24:51.087606750Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-15' from 'fc-dev-thinpool-snap-14'"
time="2022-05-21T12:24:51.184203071Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:51.216163338Z" level=debug msg="diff applied" d=31.496652ms digest="sha256:c09d5cdb7367fff0d581bb8003e3520dc3e8bd78811dfb51c92df81e58c3a50d" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=637463
time="2022-05-21T12:24:51.216566308Z" level=debug msg="commit snapshot" key="extract-86935985-FEyI sha256:21d1c32ff456f90efb6f819dc94981414274877bd50caee6b7866a3f65280253" name="sha256:21d1c32ff456f90efb6f819dc94981414274877bd50caee6b7866a3f65280253"
time="2022-05-21T12:24:51.216717884Z" level=debug msg=commit key="firecracker-containerd/26/extract-86935985-FEyI sha256:21d1c32ff456f90efb6f819dc94981414274877bd50caee6b7866a3f65280253" name="firecracker-containerd/27/sha256:21d1c32ff456f90efb6f819dc94981414274877bd50caee6b7866a3f65280253"
time="2022-05-21T12:24:51.297623808Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:24:51.299635522Z" level=debug msg="create image" name="ghcr.io/ease-lab/helloworld:var_workload" target="sha256:12dc6715ed1a8306f246ceaf7742c09e38a52a79d17421e4a50d7e0e09fdbc25"
time="2022-05-21T12:24:51.300012674Z" level=debug msg="event published" ns=firecracker-containerd topic=/images/create type=containerd.services.images.v1.ImageCreate
time="2022-05-21T12:24:51.334027246Z" level=debug msg="garbage collected" d="817.562µs"
time="2022-05-21T12:24:51.338659313Z" level=debug msg="create VM request: VMID:\"5\" MachineCfg:<MemSizeMib:256 VcpuCount:1 > KernelArgs:\"ro noapic reboot=k panic=1 pci=off nomodules systemd.log_color=false systemd.unit=firecracker.target init=/sbin/overlay-init tsc=reliable quiet 8250.nr_uarts=0 ipv6.disable=1\" NetworkInterfaces:<StaticConfig:<MacAddress:\"02:FC:00:00:00:04\" HostDevName:\"5_tap\" IPConfig:<PrimaryAddr:\"190.128.0.6/10\" GatewayAddr:\"190.128.0.1\" Nameservers:\"8.8.8.8\" > > > TimeoutSeconds:100 OffloadEnabled:true "
time="2022-05-21T12:24:51.338710590Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:24:51.339036134Z" level=debug msg="starting containerd-shim-aws-firecracker" vmID=5
time="2022-05-21T12:24:51.387511334Z" level=info msg="starting signal loop" namespace=firecracker-containerd path="/var/lib/firecracker-containerd/shim-base/firecracker-containerd#5" pid=26240
time="2022-05-21T12:24:51.387996819Z" level=info msg="creating new VM" runtime=aws.firecracker vmID=5
time="2022-05-21T12:24:51.388381435Z" level=info msg="Called startVMM(), setting up a VMM on firecracker.sock" runtime=aws.firecracker
time="2022-05-21T12:24:51.400447155Z" level=info msg="refreshMachineConfiguration: [GET /machine-config][200] getMachineConfigurationOK  &{CPUTemplate: HtEnabled:0xc000713313 MemSizeMib:0xc000713308 TrackDirtyPages:false VcpuCount:0xc000713300}" runtime=aws.firecracker
time="2022-05-21T12:24:51.400619800Z" level=info msg="PutGuestBootSource: [PUT /boot-source][204] putGuestBootSourceNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:51.400655678Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/runtime/default-rootfs.img, slot root_drive, root true." runtime=aws.firecracker
time="2022-05-21T12:24:51.400959661Z" level=info msg="Attached drive /var/lib/firecracker-containerd/runtime/default-rootfs.img: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:51.400997262Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#5/ctrstub0, slot MN2HE43UOVRDA, root false." runtime=aws.firecracker
time="2022-05-21T12:24:51.401149920Z" level=info msg="Attached drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#5/ctrstub0: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:51.401175088Z" level=info msg="Attaching NIC 5_tap (hwaddr 02:FC:00:00:00:04) at index 1" runtime=aws.firecracker
time="2022-05-21T12:24:51.418241991Z" level=info msg="startInstance successful: [PUT /actions][204] createSyncActionNoContent " runtime=aws.firecracker
time="2022-05-21T12:24:51.418258011Z" level=info msg="calling agent" runtime=aws.firecracker vmID=5
time="2022-05-21T12:24:52.118525353Z" level=info msg="successfully started the VM" runtime=aws.firecracker vmID=5
time="2022-05-21T12:24:52.119064731Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/start type=VMStart
time="2022-05-21T12:24:52.125840722Z" level=debug msg="prepare snapshot" key=5 parent="sha256:21d1c32ff456f90efb6f819dc94981414274877bd50caee6b7866a3f65280253"
time="2022-05-21T12:24:52.126585177Z" level=debug msg=prepare key=firecracker-containerd/28/5 parent="firecracker-containerd/27/sha256:21d1c32ff456f90efb6f819dc94981414274877bd50caee6b7866a3f65280253"
time="2022-05-21T12:24:52.126690075Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-16' from 'fc-dev-thinpool-snap-15'"
time="2022-05-21T12:24:52.197118965Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:24:52.201176990Z" level=debug msg="get snapshot mounts" key=5
time="2022-05-21T12:24:52.201443713Z" level=debug msg=mounts key=firecracker-containerd/28/5
time="2022-05-21T12:24:52.209478770Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/create type=containerd.events.ContainerCreate
time="2022-05-21T12:24:52.211163929Z" level=debug msg="get snapshot mounts" key=5
time="2022-05-21T12:24:52.211244871Z" level=debug msg=mounts key=firecracker-containerd/28/5
time="2022-05-21T12:24:52.243509282Z" level=debug msg="garbage collected" d="693.518µs"
time="2022-05-21T12:24:52.255266430Z" level=debug msg=StartShim runtime=aws.firecracker task_id=5
time="2022-05-21T12:24:52.255873806Z" level=debug msg="create VM request: VMID:\"5\" "
time="2022-05-21T12:24:52.255902810Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:24:52.256187858Z" level=info msg="successfully started shim (git commit: 19c96c059d7a95e8eb7f27b4e2847c4a84898698)." runtime=aws.firecracker task_id=5 vmID=5
time="2022-05-21T12:24:52.258642338Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:24:52.358603596Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/create type=containerd.events.TaskCreate
time="2022-05-21T12:24:52.371484614Z" level=info msg="successfully created task" ExecID= TaskID=5 pid_in_vm=719 runtime=aws.firecracker vmID=5
time="2022-05-21T12:24:52.377259267Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/start type=containerd.events.TaskStart
time="2022-05-21T12:25:05.465954028Z" level=debug msg="diff applied" d=9.911367773s digest="sha256:72c1fa02b2c870da7fd4c4a0af11b837cd448185b4ff31f3ced4c1e11199d743" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=248074790
time="2022-05-21T12:25:05.466983841Z" level=debug msg="commit snapshot" key="extract-370322448-tHLo sha256:e4e4f6845ea6130dbe3b08e769e3bbc16a9f0dfe037f0380c5123e9b0d9a34d6" name="sha256:e4e4f6845ea6130dbe3b08e769e3bbc16a9f0dfe037f0380c5123e9b0d9a34d6"
time="2022-05-21T12:25:05.467261284Z" level=debug msg=commit key="firecracker-containerd/4/extract-370322448-tHLo sha256:e4e4f6845ea6130dbe3b08e769e3bbc16a9f0dfe037f0380c5123e9b0d9a34d6" name="firecracker-containerd/29/sha256:e4e4f6845ea6130dbe3b08e769e3bbc16a9f0dfe037f0380c5123e9b0d9a34d6"
time="2022-05-21T12:25:05.532879944Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:25:05.533905458Z" level=debug msg="stat snapshot" key="sha256:761a2f6827609b34a6ef9b77073d68620598e904e91c550f9f0760a5da0246ff"
time="2022-05-21T12:25:05.534309160Z" level=debug msg="prepare snapshot" key="extract-534135171-Gsvg sha256:761a2f6827609b34a6ef9b77073d68620598e904e91c550f9f0760a5da0246ff" parent="sha256:e4e4f6845ea6130dbe3b08e769e3bbc16a9f0dfe037f0380c5123e9b0d9a34d6"
time="2022-05-21T12:25:05.534652628Z" level=debug msg=prepare key="firecracker-containerd/30/extract-534135171-Gsvg sha256:761a2f6827609b34a6ef9b77073d68620598e904e91c550f9f0760a5da0246ff" parent="firecracker-containerd/29/sha256:e4e4f6845ea6130dbe3b08e769e3bbc16a9f0dfe037f0380c5123e9b0d9a34d6"
time="2022-05-21T12:25:05.534714715Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-17' from 'fc-dev-thinpool-snap-3'"
time="2022-05-21T12:25:05.624614447Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:25:07.800504585Z" level=debug msg="diff applied" d=2.175628435s digest="sha256:dfd5ae2430bfdaa3eabe80a09fef72b7b1b34a9b5ffe7690b3822cdad290cba5" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=55723509
time="2022-05-21T12:25:07.800953933Z" level=debug msg="commit snapshot" key="extract-534135171-Gsvg sha256:761a2f6827609b34a6ef9b77073d68620598e904e91c550f9f0760a5da0246ff" name="sha256:761a2f6827609b34a6ef9b77073d68620598e904e91c550f9f0760a5da0246ff"
time="2022-05-21T12:25:07.801127119Z" level=debug msg=commit key="firecracker-containerd/30/extract-534135171-Gsvg sha256:761a2f6827609b34a6ef9b77073d68620598e904e91c550f9f0760a5da0246ff" name="firecracker-containerd/31/sha256:761a2f6827609b34a6ef9b77073d68620598e904e91c550f9f0760a5da0246ff"
time="2022-05-21T12:25:07.873595380Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:25:07.874943213Z" level=debug msg="stat snapshot" key="sha256:23ee52c98db3041bb984ad19b6b1791ec210dbeb9ee80380a78cd9214f54f442"
time="2022-05-21T12:25:07.875273877Z" level=debug msg="prepare snapshot" key="extract-875142168-cJsH sha256:23ee52c98db3041bb984ad19b6b1791ec210dbeb9ee80380a78cd9214f54f442" parent="sha256:761a2f6827609b34a6ef9b77073d68620598e904e91c550f9f0760a5da0246ff"
time="2022-05-21T12:25:07.875651739Z" level=debug msg=prepare key="firecracker-containerd/32/extract-875142168-cJsH sha256:23ee52c98db3041bb984ad19b6b1791ec210dbeb9ee80380a78cd9214f54f442" parent="firecracker-containerd/31/sha256:761a2f6827609b34a6ef9b77073d68620598e904e91c550f9f0760a5da0246ff"
time="2022-05-21T12:25:07.875742481Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-18' from 'fc-dev-thinpool-snap-17'"
time="2022-05-21T12:25:07.965547635Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:25:07.975635214Z" level=debug msg="diff applied" d=9.750623ms digest="sha256:f6993a2cb9082ebcb2d8d151f19a1137ebbe7c642e8a3c41aac38f816c15c4c7" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=98
time="2022-05-21T12:25:07.976126410Z" level=debug msg="commit snapshot" key="extract-875142168-cJsH sha256:23ee52c98db3041bb984ad19b6b1791ec210dbeb9ee80380a78cd9214f54f442" name="sha256:23ee52c98db3041bb984ad19b6b1791ec210dbeb9ee80380a78cd9214f54f442"
time="2022-05-21T12:25:07.976418922Z" level=debug msg=commit key="firecracker-containerd/32/extract-875142168-cJsH sha256:23ee52c98db3041bb984ad19b6b1791ec210dbeb9ee80380a78cd9214f54f442" name="firecracker-containerd/33/sha256:23ee52c98db3041bb984ad19b6b1791ec210dbeb9ee80380a78cd9214f54f442"
time="2022-05-21T12:25:08.054021408Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:25:08.055162621Z" level=debug msg="stat snapshot" key="sha256:323a98f7b4e8d733d95c457ac33ab2230a84bd409f7091912e007056fcee664c"
time="2022-05-21T12:25:08.055483496Z" level=debug msg="prepare snapshot" key="extract-55342741-WAg_ sha256:323a98f7b4e8d733d95c457ac33ab2230a84bd409f7091912e007056fcee664c" parent="sha256:23ee52c98db3041bb984ad19b6b1791ec210dbeb9ee80380a78cd9214f54f442"
time="2022-05-21T12:25:08.055931842Z" level=debug msg=prepare key="firecracker-containerd/34/extract-55342741-WAg_ sha256:323a98f7b4e8d733d95c457ac33ab2230a84bd409f7091912e007056fcee664c" parent="firecracker-containerd/33/sha256:23ee52c98db3041bb984ad19b6b1791ec210dbeb9ee80380a78cd9214f54f442"
time="2022-05-21T12:25:08.056025238Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-19' from 'fc-dev-thinpool-snap-18'"
time="2022-05-21T12:25:08.143518692Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:25:08.152801643Z" level=debug msg="diff applied" d=8.977835ms digest="sha256:964f5a9ea2070018f381d9c968d435bc4576497232bd7d3e79121b180ef2169a" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=125
time="2022-05-21T12:25:08.153114002Z" level=debug msg="commit snapshot" key="extract-55342741-WAg_ sha256:323a98f7b4e8d733d95c457ac33ab2230a84bd409f7091912e007056fcee664c" name="sha256:323a98f7b4e8d733d95c457ac33ab2230a84bd409f7091912e007056fcee664c"
time="2022-05-21T12:25:08.153267552Z" level=debug msg=commit key="firecracker-containerd/34/extract-55342741-WAg_ sha256:323a98f7b4e8d733d95c457ac33ab2230a84bd409f7091912e007056fcee664c" name="firecracker-containerd/35/sha256:323a98f7b4e8d733d95c457ac33ab2230a84bd409f7091912e007056fcee664c"
time="2022-05-21T12:25:08.229321577Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:25:08.230220162Z" level=debug msg="stat snapshot" key="sha256:a5dcf1102c49ed5747aa91058a2f763bb7ce8577819ad128c1b2d94c0d306c29"
time="2022-05-21T12:25:08.230507724Z" level=debug msg="prepare snapshot" key="extract-230381326-YdN1 sha256:a5dcf1102c49ed5747aa91058a2f763bb7ce8577819ad128c1b2d94c0d306c29" parent="sha256:323a98f7b4e8d733d95c457ac33ab2230a84bd409f7091912e007056fcee664c"
time="2022-05-21T12:25:08.230851603Z" level=debug msg=prepare key="firecracker-containerd/36/extract-230381326-YdN1 sha256:a5dcf1102c49ed5747aa91058a2f763bb7ce8577819ad128c1b2d94c0d306c29" parent="firecracker-containerd/35/sha256:323a98f7b4e8d733d95c457ac33ab2230a84bd409f7091912e007056fcee664c"
time="2022-05-21T12:25:08.230924581Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-20' from 'fc-dev-thinpool-snap-19'"
time="2022-05-21T12:25:08.286350614Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:25:10.949871578Z" level=debug msg="diff applied" d=2.663278636s digest="sha256:95853ec29c67ccc835034ef04f5765d1064b835ffb476e2a073dbb8e7b3d7cf3" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=87932348
time="2022-05-21T12:25:10.950602266Z" level=debug msg="commit snapshot" key="extract-230381326-YdN1 sha256:a5dcf1102c49ed5747aa91058a2f763bb7ce8577819ad128c1b2d94c0d306c29" name="sha256:a5dcf1102c49ed5747aa91058a2f763bb7ce8577819ad128c1b2d94c0d306c29"
time="2022-05-21T12:25:10.950969719Z" level=debug msg=commit key="firecracker-containerd/36/extract-230381326-YdN1 sha256:a5dcf1102c49ed5747aa91058a2f763bb7ce8577819ad128c1b2d94c0d306c29" name="firecracker-containerd/37/sha256:a5dcf1102c49ed5747aa91058a2f763bb7ce8577819ad128c1b2d94c0d306c29"
time="2022-05-21T12:25:11.006100937Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:25:11.007345605Z" level=debug msg="stat snapshot" key="sha256:4774be23f537f9f04d55671855857893520107cc5afbe9c2aaf9d0396d10dfaf"
time="2022-05-21T12:25:11.007767130Z" level=debug msg="prepare snapshot" key="extract-7616776-vAK0 sha256:4774be23f537f9f04d55671855857893520107cc5afbe9c2aaf9d0396d10dfaf" parent="sha256:a5dcf1102c49ed5747aa91058a2f763bb7ce8577819ad128c1b2d94c0d306c29"
time="2022-05-21T12:25:11.008122881Z" level=debug msg=prepare key="firecracker-containerd/38/extract-7616776-vAK0 sha256:4774be23f537f9f04d55671855857893520107cc5afbe9c2aaf9d0396d10dfaf" parent="firecracker-containerd/37/sha256:a5dcf1102c49ed5747aa91058a2f763bb7ce8577819ad128c1b2d94c0d306c29"
time="2022-05-21T12:25:11.008209033Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-21' from 'fc-dev-thinpool-snap-20'"
time="2022-05-21T12:25:11.081823557Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:25:11.092862250Z" level=debug msg="diff applied" d=10.610355ms digest="sha256:466a9644be5453fb0268d102159dd91b988e5d24f84431d0a5a57ee7ff21de2b" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=3742
time="2022-05-21T12:25:11.093137680Z" level=debug msg="commit snapshot" key="extract-7616776-vAK0 sha256:4774be23f537f9f04d55671855857893520107cc5afbe9c2aaf9d0396d10dfaf" name="sha256:4774be23f537f9f04d55671855857893520107cc5afbe9c2aaf9d0396d10dfaf"
time="2022-05-21T12:25:11.093358777Z" level=debug msg=commit key="firecracker-containerd/38/extract-7616776-vAK0 sha256:4774be23f537f9f04d55671855857893520107cc5afbe9c2aaf9d0396d10dfaf" name="firecracker-containerd/39/sha256:4774be23f537f9f04d55671855857893520107cc5afbe9c2aaf9d0396d10dfaf"
time="2022-05-21T12:25:11.141590750Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:25:11.142603100Z" level=debug msg="stat snapshot" key="sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb"
time="2022-05-21T12:25:11.142989929Z" level=debug msg="prepare snapshot" key="extract-142791475-HfT5 sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb" parent="sha256:4774be23f537f9f04d55671855857893520107cc5afbe9c2aaf9d0396d10dfaf"
time="2022-05-21T12:25:11.143465817Z" level=debug msg=prepare key="firecracker-containerd/40/extract-142791475-HfT5 sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb" parent="firecracker-containerd/39/sha256:4774be23f537f9f04d55671855857893520107cc5afbe9c2aaf9d0396d10dfaf"
time="2022-05-21T12:25:11.143588078Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-22' from 'fc-dev-thinpool-snap-21'"
time="2022-05-21T12:25:11.217970620Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:25:11.230484336Z" level=debug msg="diff applied" d=12.25088ms digest="sha256:614456ff946738237eb1d5e7ddb9b3b9578292cd2de96317aa37d76ea0a4eea9" media=application/vnd.docker.image.rootfs.diff.tar.gzip size=185738
time="2022-05-21T12:25:11.230799661Z" level=debug msg="commit snapshot" key="extract-142791475-HfT5 sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb" name="sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb"
time="2022-05-21T12:25:11.230985772Z" level=debug msg=commit key="firecracker-containerd/40/extract-142791475-HfT5 sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb" name="firecracker-containerd/41/sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb"
time="2022-05-21T12:25:11.281575282Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/commit type=containerd.events.SnapshotCommit
time="2022-05-21T12:25:11.283471699Z" level=debug msg="create image" name="docker.io/vhiveease/rnn_serving:var_workload" target="sha256:6a11e6dbd88b1ce1ebb284c769b52e3fdb66a0f37b392bded5612045ff2cae61"
time="2022-05-21T12:25:11.283876082Z" level=debug msg="event published" ns=firecracker-containerd topic=/images/create type=containerd.services.images.v1.ImageCreate
time="2022-05-21T12:25:11.301886687Z" level=debug msg="garbage collected" d="742.38µs"
time="2022-05-21T12:25:11.326557575Z" level=debug msg="create VM request: VMID:\"1\" MachineCfg:<MemSizeMib:256 VcpuCount:1 > KernelArgs:\"ro noapic reboot=k panic=1 pci=off nomodules systemd.log_color=false systemd.unit=firecracker.target init=/sbin/overlay-init tsc=reliable quiet 8250.nr_uarts=0 ipv6.disable=1\" NetworkInterfaces:<StaticConfig:<MacAddress:\"02:FC:00:00:00:00\" HostDevName:\"1_tap\" IPConfig:<PrimaryAddr:\"190.128.0.2/10\" GatewayAddr:\"190.128.0.1\" Nameservers:\"8.8.8.8\" > > > TimeoutSeconds:100 OffloadEnabled:true "
time="2022-05-21T12:25:11.326598613Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:25:11.326842783Z" level=debug msg="starting containerd-shim-aws-firecracker" vmID=1
time="2022-05-21T12:25:11.328074196Z" level=debug msg="create VM request: VMID:\"4\" MachineCfg:<MemSizeMib:256 VcpuCount:1 > KernelArgs:\"ro noapic reboot=k panic=1 pci=off nomodules systemd.log_color=false systemd.unit=firecracker.target init=/sbin/overlay-init tsc=reliable quiet 8250.nr_uarts=0 ipv6.disable=1\" NetworkInterfaces:<StaticConfig:<MacAddress:\"02:FC:00:00:00:03\" HostDevName:\"4_tap\" IPConfig:<PrimaryAddr:\"190.128.0.5/10\" GatewayAddr:\"190.128.0.1\" Nameservers:\"8.8.8.8\" > > > TimeoutSeconds:100 OffloadEnabled:true "
time="2022-05-21T12:25:11.328115444Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:25:11.328280325Z" level=debug msg="starting containerd-shim-aws-firecracker" vmID=4
time="2022-05-21T12:25:11.375283410Z" level=info msg="starting signal loop" namespace=firecracker-containerd path="/var/lib/firecracker-containerd/shim-base/firecracker-containerd#4" pid=26819
time="2022-05-21T12:25:11.375582144Z" level=info msg="creating new VM" runtime=aws.firecracker vmID=4
time="2022-05-21T12:25:11.375829260Z" level=info msg="Called startVMM(), setting up a VMM on firecracker.sock" runtime=aws.firecracker
time="2022-05-21T12:25:11.379299967Z" level=info msg="starting signal loop" namespace=firecracker-containerd path="/var/lib/firecracker-containerd/shim-base/firecracker-containerd#1" pid=26818
time="2022-05-21T12:25:11.379573272Z" level=info msg="creating new VM" runtime=aws.firecracker vmID=1
time="2022-05-21T12:25:11.379801954Z" level=info msg="Called startVMM(), setting up a VMM on firecracker.sock" runtime=aws.firecracker
time="2022-05-21T12:25:11.387199077Z" level=info msg="refreshMachineConfiguration: [GET /machine-config][200] getMachineConfigurationOK  &{CPUTemplate: HtEnabled:0xc00093a873 MemSizeMib:0xc00093a868 TrackDirtyPages:false VcpuCount:0xc00093a860}" runtime=aws.firecracker
time="2022-05-21T12:25:11.387356214Z" level=info msg="PutGuestBootSource: [PUT /boot-source][204] putGuestBootSourceNoContent " runtime=aws.firecracker
time="2022-05-21T12:25:11.387386721Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/runtime/default-rootfs.img, slot root_drive, root true." runtime=aws.firecracker
time="2022-05-21T12:25:11.387637324Z" level=info msg="Attached drive /var/lib/firecracker-containerd/runtime/default-rootfs.img: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:25:11.387667281Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#4/ctrstub0, slot MN2HE43UOVRDA, root false." runtime=aws.firecracker
time="2022-05-21T12:25:11.387785674Z" level=info msg="Attached drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#4/ctrstub0: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:25:11.387801133Z" level=info msg="Attaching NIC 4_tap (hwaddr 02:FC:00:00:00:03) at index 1" runtime=aws.firecracker
time="2022-05-21T12:25:11.391162384Z" level=info msg="refreshMachineConfiguration: [GET /machine-config][200] getMachineConfigurationOK  &{CPUTemplate: HtEnabled:0xc000d0cf53 MemSizeMib:0xc000d0cf48 TrackDirtyPages:false VcpuCount:0xc000d0cf40}" runtime=aws.firecracker
time="2022-05-21T12:25:11.391331583Z" level=info msg="PutGuestBootSource: [PUT /boot-source][204] putGuestBootSourceNoContent " runtime=aws.firecracker
time="2022-05-21T12:25:11.391349527Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/runtime/default-rootfs.img, slot root_drive, root true." runtime=aws.firecracker
time="2022-05-21T12:25:11.391574912Z" level=info msg="Attached drive /var/lib/firecracker-containerd/runtime/default-rootfs.img: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:25:11.391592405Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#1/ctrstub0, slot MN2HE43UOVRDA, root false." runtime=aws.firecracker
time="2022-05-21T12:25:11.391722781Z" level=info msg="Attached drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#1/ctrstub0: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:25:11.391744702Z" level=info msg="Attaching NIC 1_tap (hwaddr 02:FC:00:00:00:00) at index 1" runtime=aws.firecracker
time="2022-05-21T12:25:11.399102942Z" level=info msg="startInstance successful: [PUT /actions][204] createSyncActionNoContent " runtime=aws.firecracker
time="2022-05-21T12:25:11.399119804Z" level=info msg="calling agent" runtime=aws.firecracker vmID=4
time="2022-05-21T12:25:11.402984144Z" level=info msg="startInstance successful: [PUT /actions][204] createSyncActionNoContent " runtime=aws.firecracker
time="2022-05-21T12:25:11.402997218Z" level=info msg="calling agent" runtime=aws.firecracker vmID=1
time="2022-05-21T12:25:12.099358622Z" level=info msg="successfully started the VM" runtime=aws.firecracker vmID=4
time="2022-05-21T12:25:12.099812858Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/start type=VMStart
time="2022-05-21T12:25:12.103224644Z" level=info msg="successfully started the VM" runtime=aws.firecracker vmID=1
time="2022-05-21T12:25:12.103299526Z" level=debug msg="prepare snapshot" key=4 parent="sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb"
time="2022-05-21T12:25:12.103626583Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/start type=VMStart
time="2022-05-21T12:25:12.103741279Z" level=debug msg=prepare key=firecracker-containerd/42/4 parent="firecracker-containerd/41/sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb"
time="2022-05-21T12:25:12.103853942Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-23' from 'fc-dev-thinpool-snap-22'"
time="2022-05-21T12:25:12.112161052Z" level=debug msg="prepare snapshot" key=1 parent="sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb"
time="2022-05-21T12:25:12.112740785Z" level=debug msg=prepare key=firecracker-containerd/43/1 parent="firecracker-containerd/41/sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb"
time="2022-05-21T12:25:12.172554586Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-24' from 'fc-dev-thinpool-snap-22'"
time="2022-05-21T12:25:12.172825998Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:25:12.175410644Z" level=debug msg="get snapshot mounts" key=4
time="2022-05-21T12:25:12.175462131Z" level=debug msg=mounts key=firecracker-containerd/42/4
time="2022-05-21T12:25:12.184505791Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/create type=containerd.events.ContainerCreate
time="2022-05-21T12:25:12.185769685Z" level=debug msg="get snapshot mounts" key=4
time="2022-05-21T12:25:12.185831982Z" level=debug msg=mounts key=firecracker-containerd/42/4
time="2022-05-21T12:25:12.204733808Z" level=debug msg="garbage collected" d="671.056µs"
time="2022-05-21T12:25:12.227284136Z" level=debug msg=StartShim runtime=aws.firecracker task_id=4
time="2022-05-21T12:25:12.227854773Z" level=debug msg="create VM request: VMID:\"4\" "
time="2022-05-21T12:25:12.227882195Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:25:12.228234149Z" level=info msg="successfully started shim (git commit: 19c96c059d7a95e8eb7f27b4e2847c4a84898698)." runtime=aws.firecracker task_id=4 vmID=4
time="2022-05-21T12:25:12.230708767Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:25:12.245934257Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:25:12.248037234Z" level=debug msg="get snapshot mounts" key=1
time="2022-05-21T12:25:12.248084524Z" level=debug msg=mounts key=firecracker-containerd/43/1
time="2022-05-21T12:25:12.255745936Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/create type=containerd.events.ContainerCreate
time="2022-05-21T12:25:12.257099700Z" level=debug msg="get snapshot mounts" key=1
time="2022-05-21T12:25:12.257167918Z" level=debug msg=mounts key=firecracker-containerd/43/1
time="2022-05-21T12:25:12.276758834Z" level=debug msg="garbage collected" d="755.656µs"
time="2022-05-21T12:25:12.299323429Z" level=debug msg=StartShim runtime=aws.firecracker task_id=1
time="2022-05-21T12:25:12.299736608Z" level=debug msg="create VM request: VMID:\"1\" "
time="2022-05-21T12:25:12.299759562Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:25:12.300034340Z" level=info msg="successfully started shim (git commit: 19c96c059d7a95e8eb7f27b4e2847c4a84898698)." runtime=aws.firecracker task_id=1 vmID=1
time="2022-05-21T12:25:12.302584190Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:25:12.331164844Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/create type=containerd.events.TaskCreate
time="2022-05-21T12:25:12.342487943Z" level=info msg="successfully created task" ExecID= TaskID=4 pid_in_vm=720 runtime=aws.firecracker vmID=4
time="2022-05-21T12:25:12.348046769Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/start type=containerd.events.TaskStart
time="2022-05-21T12:25:12.403047171Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/create type=containerd.events.TaskCreate
time="2022-05-21T12:25:12.418334398Z" level=info msg="successfully created task" ExecID= TaskID=1 pid_in_vm=719 runtime=aws.firecracker vmID=1
time="2022-05-21T12:25:12.424258152Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/start type=containerd.events.TaskStart
time="2022-05-21T12:34:36.120304941Z" level=debug msg="create VM request: VMID:\"6\" MachineCfg:<MemSizeMib:256 VcpuCount:1 > KernelArgs:\"ro noapic reboot=k panic=1 pci=off nomodules systemd.log_color=false systemd.unit=firecracker.target init=/sbin/overlay-init tsc=reliable quiet 8250.nr_uarts=0 ipv6.disable=1\" NetworkInterfaces:<StaticConfig:<MacAddress:\"02:FC:00:00:00:05\" HostDevName:\"6_tap\" IPConfig:<PrimaryAddr:\"190.128.0.7/10\" GatewayAddr:\"190.128.0.1\" Nameservers:\"8.8.8.8\" > > > TimeoutSeconds:100 OffloadEnabled:true "
time="2022-05-21T12:34:36.120395703Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:34:36.120770030Z" level=debug msg="starting containerd-shim-aws-firecracker" vmID=6
time="2022-05-21T12:34:36.171393829Z" level=info msg="starting signal loop" namespace=firecracker-containerd path="/var/lib/firecracker-containerd/shim-base/firecracker-containerd#6" pid=31898
time="2022-05-21T12:34:36.171806810Z" level=info msg="creating new VM" runtime=aws.firecracker vmID=6
time="2022-05-21T12:34:36.172126354Z" level=info msg="Called startVMM(), setting up a VMM on firecracker.sock" runtime=aws.firecracker
time="2022-05-21T12:34:36.184094713Z" level=info msg="refreshMachineConfiguration: [GET /machine-config][200] getMachineConfigurationOK  &{CPUTemplate: HtEnabled:0xc00038f113 MemSizeMib:0xc00038f0c8 TrackDirtyPages:false VcpuCount:0xc00038f0c0}" runtime=aws.firecracker
time="2022-05-21T12:34:36.184324107Z" level=info msg="PutGuestBootSource: [PUT /boot-source][204] putGuestBootSourceNoContent " runtime=aws.firecracker
time="2022-05-21T12:34:36.184367880Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/runtime/default-rootfs.img, slot root_drive, root true." runtime=aws.firecracker
time="2022-05-21T12:34:36.184666274Z" level=info msg="Attached drive /var/lib/firecracker-containerd/runtime/default-rootfs.img: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:34:36.184689628Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#6/ctrstub0, slot MN2HE43UOVRDA, root false." runtime=aws.firecracker
time="2022-05-21T12:34:36.184835213Z" level=info msg="Attached drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#6/ctrstub0: [PUT /drives/{drive_id}][204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-21T12:34:36.184853899Z" level=info msg="Attaching NIC 6_tap (hwaddr 02:FC:00:00:00:05) at index 1" runtime=aws.firecracker
time="2022-05-21T12:34:36.202232338Z" level=info msg="startInstance successful: [PUT /actions][204] createSyncActionNoContent " runtime=aws.firecracker
time="2022-05-21T12:34:36.202275319Z" level=info msg="calling agent" runtime=aws.firecracker vmID=6
time="2022-05-21T12:34:36.902597734Z" level=info msg="successfully started the VM" runtime=aws.firecracker vmID=6
time="2022-05-21T12:34:36.903160368Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/start type=VMStart
time="2022-05-21T12:34:36.906864862Z" level=debug msg="prepare snapshot" key=6 parent="sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb"
time="2022-05-21T12:34:36.907205697Z" level=debug msg=prepare key=firecracker-containerd/44/6 parent="firecracker-containerd/41/sha256:d7dccd214b2b808d39d11264689977e780b9e10662398cdae4fdc734fd008cdb"
time="2022-05-21T12:34:36.907301538Z" level=debug msg="creating snapshot device 'fc-dev-thinpool-snap-25' from 'fc-dev-thinpool-snap-22'"
time="2022-05-21T12:34:36.989814281Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/prepare type=containerd.events.SnapshotPrepare
time="2022-05-21T12:34:36.991850331Z" level=debug msg="get snapshot mounts" key=6
time="2022-05-21T12:34:36.991915985Z" level=debug msg=mounts key=firecracker-containerd/44/6
time="2022-05-21T12:34:36.999427017Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/create type=containerd.events.ContainerCreate
time="2022-05-21T12:34:37.000632887Z" level=debug msg="get snapshot mounts" key=6
time="2022-05-21T12:34:37.000692700Z" level=debug msg=mounts key=firecracker-containerd/44/6
time="2022-05-21T12:34:37.027685728Z" level=debug msg="garbage collected" d="831.743µs"
time="2022-05-21T12:34:37.043377407Z" level=debug msg=StartShim runtime=aws.firecracker task_id=6
time="2022-05-21T12:34:37.043960820Z" level=debug msg="create VM request: VMID:\"6\" "
time="2022-05-21T12:34:37.043999754Z" level=debug msg="using namespace: firecracker-containerd"
time="2022-05-21T12:34:37.044295763Z" level=info msg="successfully started shim (git commit: 19c96c059d7a95e8eb7f27b4e2847c4a84898698)." runtime=aws.firecracker task_id=6 vmID=6
time="2022-05-21T12:34:37.046486375Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:34:37.145872412Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/create type=containerd.events.TaskCreate
time="2022-05-21T12:34:37.163495884Z" level=info msg="successfully created task" ExecID= TaskID=6 pid_in_vm=720 runtime=aws.firecracker vmID=6
time="2022-05-21T12:34:37.169413153Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/start type=containerd.events.TaskStart
time="2022-05-21T12:35:33.515152090Z" level=info msg=exited ExecID= TaskID=2 exit_status=137 exited_at="2022-05-21 12:35:33.499631242 +0000 UTC" runtime=aws.firecracker vmID=2
time="2022-05-21T12:35:33.515258822Z" level=info msg="connection was closed: read /proc/self/fd/14: file already closed" ExecID= TaskID=2 runtime=aws.firecracker stream=stdin vmID=2
time="2022-05-21T12:35:33.515247701Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/exit type=containerd.events.TaskExit
time="2022-05-21T12:35:33.515314667Z" level=error msg="error closing io stream" ExecID= TaskID=2 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdin vmID=2
time="2022-05-21T12:35:34.023171185Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/delete type=containerd.events.TaskDelete
time="2022-05-21T12:35:34.029031355Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:35:34.029539656Z" level=info msg="shim disconnected" id=2
time="2022-05-21T12:35:34.029643773Z" level=warning msg="cleaning up after shim disconnected" id=2 namespace=firecracker-containerd
time="2022-05-21T12:35:34.029659402Z" level=info msg="cleaning up dead shim"
time="2022-05-21T12:35:34.525450707Z" level=info msg=exited ExecID= TaskID=4 exit_status=137 exited_at="2022-05-21 12:35:34.510419647 +0000 UTC" runtime=aws.firecracker vmID=4
time="2022-05-21T12:35:34.525547219Z" level=info msg="connection was closed: read /proc/self/fd/14: file already closed" ExecID= TaskID=4 runtime=aws.firecracker stream=stdin vmID=4
time="2022-05-21T12:35:34.525598526Z" level=error msg="error closing io stream" ExecID= TaskID=4 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdin vmID=4
time="2022-05-21T12:35:34.525646237Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/exit type=containerd.events.TaskExit
time="2022-05-21T12:35:34.536096345Z" level=info msg=exited ExecID= TaskID=5 exit_status=137 exited_at="2022-05-21 12:35:34.52122283 +0000 UTC" runtime=aws.firecracker vmID=5
time="2022-05-21T12:35:34.536129748Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/exit type=containerd.events.TaskExit
time="2022-05-21T12:35:34.536205812Z" level=info msg="connection was closed: read /proc/self/fd/14: file already closed" ExecID= TaskID=5 runtime=aws.firecracker stream=stdin vmID=5
time="2022-05-21T12:35:34.536276846Z" level=error msg="error closing io stream" ExecID= TaskID=5 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdin vmID=5
time="2022-05-21T12:35:35.034033817Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/delete type=containerd.events.TaskDelete
time="2022-05-21T12:35:35.043916212Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/delete type=containerd.events.TaskDelete
time="2022-05-21T12:35:35.044094730Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:35:35.044436526Z" level=info msg="shim disconnected" id=4
time="2022-05-21T12:35:35.044473245Z" level=warning msg="cleaning up after shim disconnected" id=4 namespace=firecracker-containerd
time="2022-05-21T12:35:35.044485418Z" level=info msg="cleaning up dead shim"
time="2022-05-21T12:35:35.050067843Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:35:35.050438454Z" level=info msg="shim disconnected" id=5
time="2022-05-21T12:35:35.050492075Z" level=warning msg="cleaning up after shim disconnected" id=5 namespace=firecracker-containerd
time="2022-05-21T12:35:35.050505430Z" level=info msg="cleaning up dead shim"
time="2022-05-21T12:35:37.072952376Z" level=info msg=exited ExecID= TaskID=1 exit_status=137 exited_at="2022-05-21 12:35:37.057779967 +0000 UTC" runtime=aws.firecracker vmID=1
time="2022-05-21T12:35:37.073011828Z" level=info msg="connection was closed: read /proc/self/fd/14: file already closed" ExecID= TaskID=1 runtime=aws.firecracker stream=stdin vmID=1
time="2022-05-21T12:35:37.073058046Z" level=error msg="error closing io stream" ExecID= TaskID=1 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdin vmID=1
time="2022-05-21T12:35:37.073077032Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/exit type=containerd.events.TaskExit
time="2022-05-21T12:35:37.082337741Z" level=info msg=exited ExecID= TaskID=3 exit_status=137 exited_at="2022-05-21 12:35:37.066925921 +0000 UTC" runtime=aws.firecracker vmID=3
time="2022-05-21T12:35:37.082409727Z" level=info msg="connection was closed: read /proc/self/fd/14: file already closed" ExecID= TaskID=3 runtime=aws.firecracker stream=stdin vmID=3
time="2022-05-21T12:35:37.082431398Z" level=error msg="error closing io stream" ExecID= TaskID=3 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdin vmID=3
time="2022-05-21T12:35:37.082454582Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/exit type=containerd.events.TaskExit
time="2022-05-21T12:35:37.581306291Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/delete type=containerd.events.TaskDelete
time="2022-05-21T12:35:37.589649115Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/delete type=containerd.events.TaskDelete
time="2022-05-21T12:35:37.591723026Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:35:37.592051978Z" level=info msg="shim disconnected" id=1
time="2022-05-21T12:35:37.592108655Z" level=warning msg="cleaning up after shim disconnected" id=1 namespace=firecracker-containerd
time="2022-05-21T12:35:37.592118524Z" level=info msg="cleaning up dead shim"
time="2022-05-21T12:35:37.596143814Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:35:37.596375042Z" level=info msg="shim disconnected" id=3
time="2022-05-21T12:35:37.596425156Z" level=warning msg="cleaning up after shim disconnected" id=3 namespace=firecracker-containerd
time="2022-05-21T12:35:37.596432901Z" level=info msg="cleaning up dead shim"
time="2022-05-21T12:35:38.515280086Z" level=error msg="error closing io stream" ExecID= TaskID=2 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stderr vmID=2
time="2022-05-21T12:35:38.515270889Z" level=error msg="error closing io stream" ExecID= TaskID=2 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdout vmID=2
time="2022-05-21T12:35:39.077965525Z" level=error msg="failed to delete" cmd="/usr/local/bin/containerd-shim-aws-firecracker -namespace firecracker-containerd -address /run/firecracker-containerd/containerd.sock -publish-binary /usr/local/bin/firecracker-containerd -id 2 -bundle /run/firecracker-containerd/io.containerd.runtime.v2.task/firecracker-containerd/2 delete" error="exit status 1"
time="2022-05-21T12:35:39.078094950Z" level=warning msg="failed to clean up after shim disconnected" error="aws.firecracker: rpc error: code = DeadlineExceeded desc = timed out waiting for VM start\n: exit status 1" id=2 namespace=firecracker-containerd
time="2022-05-21T12:35:39.079736584Z" level=debug msg="remove snapshot" key=2
time="2022-05-21T12:35:39.080298125Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/remove type=containerd.events.SnapshotRemove
time="2022-05-21T12:35:39.080821836Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/delete type=containerd.events.ContainerDelete
time="2022-05-21T12:35:39.081206864Z" level=debug msg="stop VM: VMID:\"2\" "
time="2022-05-21T12:35:39.081553950Z" level=info msg="stopping the VM" runtime=aws.firecracker vmID=2
time="2022-05-21T12:35:39.090562622Z" level=debug msg="schedule snapshotter cleanup" snapshotter=devmapper
time="2022-05-21T12:35:39.090651170Z" level=debug msg=walk
time="2022-05-21T12:35:39.090852350Z" level=debug msg=remove key=firecracker-containerd/21/2
time="2022-05-21T12:35:39.193934417Z" level=debug msg="removed snapshot" key=firecracker-containerd/21/2 snapshotter=devmapper
time="2022-05-21T12:35:39.193956018Z" level=debug msg=cleanup
time="2022-05-21T12:35:39.193964594Z" level=debug msg="snapshot garbage collected" d=103.357858ms snapshotter=devmapper
time="2022-05-21T12:35:39.193983350Z" level=debug msg="garbage collected" d="883.059µs"
time="2022-05-21T12:35:39.499060469Z" level=info msg="firecracker exited: status=0" runtime=aws.firecracker
time="2022-05-21T12:35:39.499663128Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/stop type=VMStop
time="2022-05-21T12:35:39.525661724Z" level=error msg="error closing io stream" ExecID= TaskID=4 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdout vmID=4
time="2022-05-21T12:35:39.525652867Z" level=error msg="error closing io stream" ExecID= TaskID=4 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stderr vmID=4
time="2022-05-21T12:35:39.536259771Z" level=error msg="error closing io stream" ExecID= TaskID=5 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stderr vmID=5
time="2022-05-21T12:35:39.536259741Z" level=error msg="error closing io stream" ExecID= TaskID=5 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdout vmID=5
time="2022-05-21T12:35:40.141582173Z" level=error msg="failed to delete" cmd="/usr/local/bin/containerd-shim-aws-firecracker -namespace firecracker-containerd -address /run/firecracker-containerd/containerd.sock -publish-binary /usr/local/bin/firecracker-containerd -id 4 -bundle /run/firecracker-containerd/io.containerd.runtime.v2.task/firecracker-containerd/4 delete" error="exit status 1"
time="2022-05-21T12:35:40.141716307Z" level=warning msg="failed to clean up after shim disconnected" error="aws.firecracker: rpc error: code = DeadlineExceeded desc = timed out waiting for VM start\n: exit status 1" id=4 namespace=firecracker-containerd
time="2022-05-21T12:35:40.141557086Z" level=error msg="failed to delete" cmd="/usr/local/bin/containerd-shim-aws-firecracker -namespace firecracker-containerd -address /run/firecracker-containerd/containerd.sock -publish-binary /usr/local/bin/firecracker-containerd -id 5 -bundle /run/firecracker-containerd/io.containerd.runtime.v2.task/firecracker-containerd/5 delete" error="exit status 1"
time="2022-05-21T12:35:40.141825333Z" level=warning msg="failed to clean up after shim disconnected" error="aws.firecracker: rpc error: code = DeadlineExceeded desc = timed out waiting for VM start\n: exit status 1" id=5 namespace=firecracker-containerd
time="2022-05-21T12:35:40.143440216Z" level=debug msg="remove snapshot" key=4
time="2022-05-21T12:35:40.143464943Z" level=debug msg="remove snapshot" key=5
time="2022-05-21T12:35:40.144007869Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/remove type=containerd.events.SnapshotRemove
time="2022-05-21T12:35:40.144447070Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/remove type=containerd.events.SnapshotRemove
time="2022-05-21T12:35:40.144839111Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/delete type=containerd.events.ContainerDelete
time="2022-05-21T12:35:40.145152784Z" level=debug msg="stop VM: VMID:\"4\" "
time="2022-05-21T12:35:40.145205604Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/delete type=containerd.events.ContainerDelete
time="2022-05-21T12:35:40.145438494Z" level=debug msg="stop VM: VMID:\"5\" "
time="2022-05-21T12:35:40.145482488Z" level=info msg="stopping the VM" runtime=aws.firecracker vmID=4
time="2022-05-21T12:35:40.145822941Z" level=info msg="stopping the VM" runtime=aws.firecracker vmID=5
time="2022-05-21T12:35:40.155076697Z" level=debug msg="schedule snapshotter cleanup" snapshotter=devmapper
time="2022-05-21T12:35:40.155138284Z" level=debug msg=walk
time="2022-05-21T12:35:40.155298086Z" level=debug msg=remove key=firecracker-containerd/42/4
time="2022-05-21T12:35:40.265315476Z" level=debug msg="removed snapshot" key=firecracker-containerd/42/4 snapshotter=devmapper
time="2022-05-21T12:35:40.265379196Z" level=debug msg=remove key=firecracker-containerd/28/5
time="2022-05-21T12:35:40.384238663Z" level=debug msg="removed snapshot" key=firecracker-containerd/28/5 snapshotter=devmapper
time="2022-05-21T12:35:40.384263490Z" level=debug msg=cleanup
time="2022-05-21T12:35:40.384276064Z" level=debug msg="snapshot garbage collected" d=229.16377ms snapshotter=devmapper
time="2022-05-21T12:35:40.384304889Z" level=debug msg="garbage collected" d="768.834µs"
time="2022-05-21T12:35:40.533405823Z" level=info msg="firecracker exited: status=0" runtime=aws.firecracker
time="2022-05-21T12:35:40.534026236Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/stop type=VMStop
time="2022-05-21T12:35:40.594976715Z" level=info msg="firecracker exited: status=0" runtime=aws.firecracker
time="2022-05-21T12:35:40.595667792Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/stop type=VMStop
time="2022-05-21T12:35:42.073176402Z" level=error msg="error closing io stream" ExecID= TaskID=1 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stderr vmID=1
time="2022-05-21T12:35:42.073200047Z" level=error msg="error closing io stream" ExecID= TaskID=1 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdout vmID=1
time="2022-05-21T12:35:42.082455386Z" level=error msg="error closing io stream" ExecID= TaskID=3 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdout vmID=3
time="2022-05-21T12:35:42.082481695Z" level=error msg="error closing io stream" ExecID= TaskID=3 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stderr vmID=3
time="2022-05-21T12:35:42.641916005Z" level=error msg="failed to delete" cmd="/usr/local/bin/containerd-shim-aws-firecracker -namespace firecracker-containerd -address /run/firecracker-containerd/containerd.sock -publish-binary /usr/local/bin/firecracker-containerd -id 1 -bundle /run/firecracker-containerd/io.containerd.runtime.v2.task/firecracker-containerd/1 delete" error="exit status 1"
time="2022-05-21T12:35:42.642036283Z" level=warning msg="failed to clean up after shim disconnected" error="aws.firecracker: rpc error: code = DeadlineExceeded desc = timed out waiting for VM start\n: exit status 1" id=1 namespace=firecracker-containerd
time="2022-05-21T12:35:42.643382057Z" level=debug msg="remove snapshot" key=1
time="2022-05-21T12:35:42.643884527Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/remove type=containerd.events.SnapshotRemove
time="2022-05-21T12:35:42.644369685Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/delete type=containerd.events.ContainerDelete
time="2022-05-21T12:35:42.644636419Z" level=debug msg="stop VM: VMID:\"1\" "
time="2022-05-21T12:35:42.644896030Z" level=info msg="stopping the VM" runtime=aws.firecracker vmID=1
time="2022-05-21T12:35:42.645825477Z" level=error msg="failed to delete" cmd="/usr/local/bin/containerd-shim-aws-firecracker -namespace firecracker-containerd -address /run/firecracker-containerd/containerd.sock -publish-binary /usr/local/bin/firecracker-containerd -id 3 -bundle /run/firecracker-containerd/io.containerd.runtime.v2.task/firecracker-containerd/3 delete" error="exit status 1"
time="2022-05-21T12:35:42.645872436Z" level=warning msg="failed to clean up after shim disconnected" error="aws.firecracker: rpc error: code = DeadlineExceeded desc = timed out waiting for VM start\n: exit status 1" id=3 namespace=firecracker-containerd
time="2022-05-21T12:35:42.647077524Z" level=debug msg="remove snapshot" key=3
time="2022-05-21T12:35:42.647643103Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/remove type=containerd.events.SnapshotRemove
time="2022-05-21T12:35:42.648095619Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/delete type=containerd.events.ContainerDelete
time="2022-05-21T12:35:42.648346724Z" level=debug msg="stop VM: VMID:\"3\" "
time="2022-05-21T12:35:42.648660918Z" level=info msg="stopping the VM" runtime=aws.firecracker vmID=3
time="2022-05-21T12:35:42.679480228Z" level=debug msg="schedule snapshotter cleanup" snapshotter=devmapper
time="2022-05-21T12:35:42.679547655Z" level=debug msg=walk
time="2022-05-21T12:35:42.679798159Z" level=debug msg=remove key=firecracker-containerd/43/1
time="2022-05-21T12:35:42.781310628Z" level=debug msg="removed snapshot" key=firecracker-containerd/43/1 snapshotter=devmapper
time="2022-05-21T12:35:42.781358118Z" level=debug msg=remove key=firecracker-containerd/22/3
time="2022-05-21T12:35:42.896503142Z" level=debug msg="removed snapshot" key=firecracker-containerd/22/3 snapshotter=devmapper
time="2022-05-21T12:35:42.896522358Z" level=debug msg=cleanup
time="2022-05-21T12:35:42.896532037Z" level=debug msg="snapshot garbage collected" d=217.018606ms snapshotter=devmapper
time="2022-05-21T12:35:42.896558006Z" level=debug msg="garbage collected" d="837.133µs"
time="2022-05-21T12:35:43.038832794Z" level=info msg="firecracker exited: status=0" runtime=aws.firecracker
time="2022-05-21T12:35:43.039421898Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/stop type=VMStop
time="2022-05-21T12:35:43.094980772Z" level=info msg="firecracker exited: status=0" runtime=aws.firecracker
time="2022-05-21T12:35:43.095515483Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/stop type=VMStop
time="2022-05-21T12:35:44.499752285Z" level=error msg="aws.firecracker: publisher not closed" shim_stream=stderr vmID=2
time="2022-05-21T12:35:44.502671604Z" level=debug msg="shim has been terminated" error="exit status 1" vmID=2
time="2022-05-21T12:35:45.533993126Z" level=error msg="aws.firecracker: publisher not closed" shim_stream=stderr vmID=4
time="2022-05-21T12:35:45.536593061Z" level=debug msg="shim has been terminated" error="exit status 1" vmID=4
time="2022-05-21T12:35:45.595866359Z" level=error msg="aws.firecracker: publisher not closed" shim_stream=stderr vmID=5
time="2022-05-21T12:35:45.598999713Z" level=debug msg="shim has been terminated" error="exit status 1" vmID=5
time="2022-05-21T12:35:48.039466018Z" level=error msg="aws.firecracker: publisher not closed" shim_stream=stderr vmID=1
time="2022-05-21T12:35:48.042676518Z" level=debug msg="shim has been terminated" error="exit status 1" vmID=1
time="2022-05-21T12:35:48.095532793Z" level=error msg="aws.firecracker: publisher not closed" shim_stream=stderr vmID=3
time="2022-05-21T12:35:48.101212842Z" level=debug msg="shim has been terminated" error="exit status 1" vmID=3
time="2022-05-21T12:45:33.967395092Z" level=info msg=exited ExecID= TaskID=6 exit_status=137 exited_at="2022-05-21 12:45:33.95509659 +0000 UTC" runtime=aws.firecracker vmID=6
time="2022-05-21T12:45:33.967606401Z" level=info msg="connection was closed: read /proc/self/fd/14: file already closed" ExecID= TaskID=6 runtime=aws.firecracker stream=stdin vmID=6
time="2022-05-21T12:45:33.967681954Z" level=error msg="error closing io stream" ExecID= TaskID=6 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdin vmID=6
time="2022-05-21T12:45:33.967674190Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/exit type=containerd.events.TaskExit
time="2022-05-21T12:45:34.475414760Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/tasks/delete type=containerd.events.TaskDelete
time="2022-05-21T12:45:34.487400333Z" level=info msg="PatchGuestDrive successful" runtime=aws.firecracker
time="2022-05-21T12:45:34.487860152Z" level=info msg="shim disconnected" id=6
time="2022-05-21T12:45:34.487923412Z" level=warning msg="cleaning up after shim disconnected" id=6 namespace=firecracker-containerd
time="2022-05-21T12:45:34.487937459Z" level=info msg="cleaning up dead shim"
time="2022-05-21T12:45:38.967669723Z" level=error msg="error closing io stream" ExecID= TaskID=6 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stdout vmID=6
time="2022-05-21T12:45:38.967669984Z" level=error msg="error closing io stream" ExecID= TaskID=6 error="1 error occurred:\n\t* close unix @->firecracker.vsock: use of closed network connection\n\n" runtime=aws.firecracker stream=stderr vmID=6
time="2022-05-21T12:45:39.537759581Z" level=error msg="failed to delete" cmd="/usr/local/bin/containerd-shim-aws-firecracker -namespace firecracker-containerd -address /run/firecracker-containerd/containerd.sock -publish-binary /usr/local/bin/firecracker-containerd -id 6 -bundle /run/firecracker-containerd/io.containerd.runtime.v2.task/firecracker-containerd/6 delete" error="exit status 1"
time="2022-05-21T12:45:39.537904966Z" level=warning msg="failed to clean up after shim disconnected" error="aws.firecracker: rpc error: code = DeadlineExceeded desc = timed out waiting for VM start\n: exit status 1" id=6 namespace=firecracker-containerd
time="2022-05-21T12:45:39.539368444Z" level=debug msg="remove snapshot" key=6
time="2022-05-21T12:45:39.539953000Z" level=debug msg="event published" ns=firecracker-containerd topic=/snapshot/remove type=containerd.events.SnapshotRemove
time="2022-05-21T12:45:39.540717957Z" level=debug msg="event published" ns=firecracker-containerd topic=/containers/delete type=containerd.events.ContainerDelete
time="2022-05-21T12:45:39.541061777Z" level=debug msg="stop VM: VMID:\"6\" "
time="2022-05-21T12:45:39.541464309Z" level=info msg="stopping the VM" runtime=aws.firecracker vmID=6
time="2022-05-21T12:45:39.566609475Z" level=debug msg="schedule snapshotter cleanup" snapshotter=devmapper
time="2022-05-21T12:45:39.566676362Z" level=debug msg=walk
time="2022-05-21T12:45:39.566867153Z" level=debug msg=remove key=firecracker-containerd/44/6
time="2022-05-21T12:45:39.665208864Z" level=debug msg="removed snapshot" key=firecracker-containerd/44/6 snapshotter=devmapper
time="2022-05-21T12:45:39.665227930Z" level=debug msg=cleanup
time="2022-05-21T12:45:39.665235825Z" level=debug msg="snapshot garbage collected" d=98.590682ms snapshotter=devmapper
time="2022-05-21T12:45:39.665255131Z" level=debug msg="garbage collected" d="699.814µs"
time="2022-05-21T12:45:39.949359456Z" level=info msg="firecracker exited: status=0" runtime=aws.firecracker
time="2022-05-21T12:45:39.950229812Z" level=debug msg="event forwarded" ns=firecracker-containerd topic=/firecracker-vm/stop type=VMStop
time="2022-05-21T12:45:44.950425067Z" level=error msg="aws.firecracker: publisher not closed" shim_stream=stderr vmID=6
time="2022-05-21T12:45:44.953747532Z" level=debug msg="shim has been terminated" error="exit status 1" vmID=6
ustiugov commented 2 years ago

The logs don't show any problems. What does kubectl get pods show? Please reproduce the issue with just the helloworld function.
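For reference, the helloworld function alone can be deployed with kn directly, reusing the image, port, environment values, and autoscaling annotation that appear in the revision spec further below in this thread; this is only a sketch of an equivalent invocation, not necessarily the exact command the vHive deployer issues:

kn service create helloworld-0 \
  --image crccheck/hello-world:latest \
  --port 50051 \
  --env GUEST_PORT=50051 \
  --env GUEST_IMAGE=ghcr.io/ease-lab/helloworld:var_workload \
  --annotation autoscaling.knative.dev/target=1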

aditya2803 commented 2 years ago

Hi @ustiugov, the output of kubectl get pods is No resources found in default namespace.

The output of kubectl get pods -A is:

NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
istio-system       cluster-local-gateway-74c4558686-7w54r     1/1     Running   0          15m
istio-system       istio-ingressgateway-f5b59cc7c-jb6dm       1/1     Running   0          15m
istio-system       istiod-54bbfb4d85-4ldbk                    1/1     Running   0          15m
knative-eventing   eventing-controller-59475d565c-79xg9       1/1     Running   0          15m
knative-eventing   eventing-webhook-74cbb75cb-nrrx6           1/1     Running   0          15m
knative-eventing   imc-controller-84c7f75c67-45jg6            1/1     Running   0          15m
knative-eventing   imc-dispatcher-7786967556-ld7rw            1/1     Running   0          15m
knative-eventing   mt-broker-controller-65bb965bf9-64fq6      1/1     Running   0          15m
knative-eventing   mt-broker-filter-8496c9765-w925x           1/1     Running   0          15m
knative-eventing   mt-broker-ingress-67959dc68f-4fp5x         1/1     Running   0          15m
knative-serving    activator-7f7865c9f5-7f8mf                 1/1     Running   0          15m
knative-serving    autoscaler-5f795f4cb7-ljpn6                1/1     Running   0          15m
knative-serving    controller-5b7545f6f5-8sd7f                1/1     Running   0          15m
knative-serving    default-domain-m5jmz                       1/1     Running   0          15m
knative-serving    domain-mapping-9f9784f9b-pwhlb             1/1     Running   0          15m
knative-serving    domainmapping-webhook-67896589f6-m6c4q     1/1     Running   0          15m
knative-serving    net-istio-controller-6b84bc75d6-nxszc      1/1     Running   0          15m
knative-serving    net-istio-webhook-f96dbffb4-j7w5q          1/1     Running   0          15m
knative-serving    webhook-557f4b554d-xrmpp                   1/1     Running   0          15m
kube-system        calico-kube-controllers-644b84fc59-57d2z   1/1     Running   0          15m
kube-system        canal-tjcsv                                2/2     Running   0          15m
kube-system        coredns-64897985d-mmpp7                    1/1     Running   0          15m
kube-system        coredns-64897985d-n4qzt                    1/1     Running   0          15m
kube-system        etcd                                       1/1     Running   2          16m
kube-system        kube-apiserver                             1/1     Running   2          16m
kube-system        kube-controller-manager                    1/1     Running   16         15m
kube-system        kube-proxy-zx2tn                           1/1     Running   0          15m
kube-system        kube-scheduler                             1/1     Running   29         15m
metallb-system     controller-557988499-fgdhz                 1/1     Running   0          15m
metallb-system     speaker-n8cqs                              1/1     Running   0          15m
registry           docker-registry-pod-p4b6h                  1/1     Running   0          15m
registry           registry-etc-hosts-update-7s4hx            1/1     Running   0          15m
ustiugov commented 2 years ago

kubectl get pods -A shows no deployed functions.
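If the deployment had succeeded, the function's user pods would appear in the default namespace; a quick way to filter for them, assuming the standard serving.knative.dev/service label (the same label that shows up on the revision below), is:

kubectl get pods -n default -l serving.knative.dev/service=helloworld-0 -o wide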

aditya2803 commented 2 years ago

Yes, I presume that's because there are no ready revisions, but I am not sure why that is happening.

Output of kn services list:

NAME           URL                                       LATEST   AGE   CONDITIONS   READY   REASON
helloworld-0   http://helloworld-0.default.example.com            19m   0 OK / 3     False   RevisionMissing : Configuration "helloworld-0" does not have any ready Revision.

Output of kubectl get revisions

NAME                 CONFIG NAME    K8S SERVICE NAME   GENERATION   READY   REASON                     ACTUAL REPLICAS   DESIRED REPLICAS
helloworld-0-00001   helloworld-0                      1            False   ProgressDeadlineExceeded   0

Output of kubectl describe revision/helloworld-0-00001

Name:         helloworld-0-00001
Namespace:    default
Labels:       serving.knative.dev/configuration=helloworld-0
              serving.knative.dev/configurationGeneration=1
              serving.knative.dev/configurationUID=1cbbfe51-92c5-4cf0-a91f-a5d0fb28859c
              serving.knative.dev/routingState=active
              serving.knative.dev/service=helloworld-0
              serving.knative.dev/serviceUID=20b5d456-e512-4c66-8a16-b2509f56e7b7
Annotations:  autoscaling.knative.dev/target: 1
              serving.knative.dev/creator: kubernetes-admin
              serving.knative.dev/routes: helloworld-0
              serving.knative.dev/routingStateModified: 2022-05-21T13:36:46Z
API Version:  serving.knative.dev/v1
Kind:         Revision
Metadata:
  Creation Timestamp:  2022-05-21T13:36:46Z
  Generation:          1
  Managed Fields:
    API Version:  serving.knative.dev/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:autoscaling.knative.dev/target:
          f:serving.knative.dev/creator:
          f:serving.knative.dev/routes:
          f:serving.knative.dev/routingStateModified:
        f:labels:
          .:
          f:serving.knative.dev/configuration:
          f:serving.knative.dev/configurationGeneration:
          f:serving.knative.dev/configurationUID:
          f:serving.knative.dev/routingState:
          f:serving.knative.dev/service:
          f:serving.knative.dev/serviceUID:
        f:ownerReferences:
          .:
          k:{"uid":"1cbbfe51-92c5-4cf0-a91f-a5d0fb28859c"}:
      f:spec:
        .:
        f:containerConcurrency:
        f:containers:
        f:enableServiceLinks:
        f:timeoutSeconds:
    Manager:      Go-http-client
    Operation:    Update
    Time:         2022-05-21T13:36:46Z
    API Version:  serving.knative.dev/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:actualReplicas:
        f:conditions:
        f:containerStatuses:
        f:observedGeneration:
    Manager:      Go-http-client
    Operation:    Update
    Subresource:  status
    Time:         2022-05-21T13:47:16Z
  Owner References:
    API Version:           serving.knative.dev/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Configuration
    Name:                  helloworld-0
    UID:                   1cbbfe51-92c5-4cf0-a91f-a5d0fb28859c
  Resource Version:        7100
  UID:                     608c7b88-edc1-4492-9bf8-fe231d47f7ad
Spec:
  Container Concurrency:  0
  Containers:
    Env:
      Name:   GUEST_PORT
      Value:  50051
      Name:   GUEST_IMAGE
      Value:  ghcr.io/ease-lab/helloworld:var_workload
    Image:    crccheck/hello-world:latest
    Name:     user-container
    Ports:
      Container Port:  50051
      Name:            h2c
      Protocol:        TCP
    Readiness Probe:
      Success Threshold:  1
      Tcp Socket:
        Port:  0
    Resources:
  Enable Service Links:  false
  Timeout Seconds:       300
Status:
  Actual Replicas:  0
  Conditions:
    Last Transition Time:  2022-05-21T13:47:16Z
    Message:               The target is not receiving traffic.
    Reason:                NoTraffic
    Severity:              Info
    Status:                False
    Type:                  Active
    Last Transition Time:  2022-05-21T13:46:47Z
    Message:               Container failed with: container exited with no error
    Reason:                ExitCode0
    Status:                False
    Type:                  ContainerHealthy
    Last Transition Time:  2022-05-21T13:47:16Z
    Message:               Initial scale was never achieved
    Reason:                ProgressDeadlineExceeded
    Status:                False
    Type:                  Ready
    Last Transition Time:  2022-05-21T13:47:16Z
    Message:               Initial scale was never achieved
    Reason:                ProgressDeadlineExceeded
    Status:                False
    Type:                  ResourcesAvailable
  Container Statuses:
    Name:               user-container
  Observed Generation:  1
Events:
  Type     Reason         Age    From                 Message
  ----     ------         ----   ----                 -------
  Warning  InternalError  8m18s  revision-controller  failed to update deployment "helloworld-0-00001-deployment": Operation cannot be fulfilled on deployments.apps "helloworld-0-00001-deployment": the object has been modified; please apply your changes to the latest version and try again
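The ProgressDeadlineExceeded condition above means the revision's underlying Deployment never reached its initial scale. One way to look a layer deeper, using the deployment name from the event message and the standard Knative revision label (an assumption; the label is not shown in the outputs above), would be:

kubectl get deployment helloworld-0-00001-deployment -n default
kubectl describe deployment helloworld-0-00001-deployment -n default
kubectl get pods -n default -l serving.knative.dev/revision=helloworld-0-00001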
ustiugov commented 2 years ago

After you started using the other branch, did you start with a new, clean node or keep using the old one?

aditya2803 commented 2 years ago

I have been using the old node, but I cleared all previous files (starting with an empty filesystem), cloned the new branch, and then started the process.

ustiugov commented 2 years ago

I suggest using a fresh node

aditya2803 commented 2 years ago

Sure, I'll try that, though it may take a few days. Can you suggest a way to clean up the current node so that it can be used with the fresh branch?
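A rough manual cleanup sketch, assuming nothing else on the node needs to be preserved (the firecracker-containerd paths are the ones visible in the logs above; a fresh node remains the safer option):

# tear down the Kubernetes control plane and CNI state
sudo kubeadm reset -f
sudo rm -rf /etc/cni/net.d /var/lib/cni
# stop any leftover firecracker-containerd shims and microVMs
sudo pkill -f firecracker-containerd || true
sudo pkill -f firecracker || true
# remove firecracker-containerd state
sudo rm -rf /var/lib/firecracker-containerd /run/firecracker-containerd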

aditya2803 commented 2 years ago

Hi @ustiugov, I tried using a fresh AWS EC2 instance running AMD and Ubuntu 20.04 for this. I used the new branch (#465) and also applied the change from (#481) locally. However, I ran into exactly the same issue once again (same output for kubectl get pods, kubectl get revisions, etc.).

vhive.stdout

time="2022-05-22T06:48:17.906469735Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-05-22T06:48:18.182771404Z" level=error msg="VM config for pod a816c49fc0c057c63746107ce10b2a119973b3536f6dd4c0bfe0d29d5fac762e does not exist"
time="2022-05-22T06:48:18.182800714Z" level=error error="VM config for pod does not exist"
time="2022-05-22T06:48:18.198215955Z" level=error msg="coordinator failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unkno
wn desc = failed to create VM: failed to start the VM: Put \"http://localhost/actions\": EOF" image="vhiveease/rnn_serving:var_workload" vmID=165
time="2022-05-22T06:48:18.198257295Z" level=error msg="failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unknown desc = fa
iled to create VM: failed to start the VM: Put \"http://localhost/actions\": EOF"
time="2022-05-22T06:48:18.204791658Z" level=error msg="VM config for pod 2c9008a36f3f462efaebef6e22905a6fb1d7fc21f52b67124c2e43bb410c1c33 does not exist"
time="2022-05-22T06:48:18.204810928Z" level=error error="VM config for pod does not exist"
time="2022-05-22T06:48:28.178727997Z" level=error msg="VM config for pod 93d79475aa67f0fc707f0b5e554185f8a1bc804b47a7427ba590c6e295b319a1 does not exist"
time="2022-05-22T06:48:28.178749627Z" level=error error="VM config for pod does not exist"
time="2022-05-22T06:48:29.178473676Z" level=error msg="VM config for pod dd3b7e9755d55deadd08aab43b9ebc631c9694530a237adf53446796d6a13a91 does not exist"
time="2022-05-22T06:48:29.178500227Z" level=error error="VM config for pod does not exist"
time="2022-05-22T06:48:29.949906752Z" level=warning msg="Failed to Fetch k8s dns clusterIP exit status 1\nThe connection to the server localhost:8080 was refused - did you s
pecify the right host or port?\n\n"
time="2022-05-22T06:48:29.949939242Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-05-22T06:48:30.180305088Z" level=error msg="VM config for pod a816c49fc0c057c63746107ce10b2a119973b3536f6dd4c0bfe0d29d5fac762e does not exist"
time="2022-05-22T06:48:30.180337768Z" level=error error="VM config for pod does not exist"
time="2022-05-22T06:48:30.258215038Z" level=error msg="coordinator failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unkno
wn desc = failed to create VM: failed to start the VM: Put \"http://localhost/actions\": EOF" image="ghcr.io/ease-lab/pyaes:var_workload" vmID=166
time="2022-05-22T06:48:30.258258959Z" level=error msg="failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unknown desc = fa
iled to create VM: failed to start the VM: Put \"http://localhost/actions\": EOF"
time="2022-05-22T06:48:30.260811597Z" level=error msg="VM config for pod 8b136b626a162d8954c178f9052b9cd164c69754058388eb9eec8cc5d656ac30 does not exist"
time="2022-05-22T06:48:30.260828387Z" level=error error="VM config for pod does not exist"
time="2022-05-22T06:48:33.183440545Z" level=error msg="VM config for pod 2c9008a36f3f462efaebef6e22905a6fb1d7fc21f52b67124c2e43bb410c1c33 does not exist"
time="2022-05-22T06:48:33.183504665Z" level=error error="VM config for pod does not exist"
time="2022-05-22T06:48:37.477829630Z" level=info msg="HEARTBEAT: number of active VMs: 0"
time="2022-05-22T06:48:37.486898090Z" level=info msg="FuncPool heartbeat: ==== Stats by cold functions ====\nfID, #started, #served\n==================================="
time="2022-05-22T06:48:39.880526375Z" level=warning msg="Failed to Fetch k8s dns clusterIP exit status 1\nThe connection to the server localhost:8080 was refused - did you s
pecify the right host or port?\n\n"
time="2022-05-22T06:48:39.880558325Z" level=warning msg="Using google dns 8.8.8.8\n"
time="2022-05-22T06:48:40.182251734Z" level=error msg="coordinator failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unkno
wn desc = failed to create VM: failed to start the VM: Put \"http://localhost/actions\": EOF" image="ghcr.io/ease-lab/helloworld:var_workload" vmID=167
time="2022-05-22T06:48:40.182304234Z" level=error msg="failed to start VM" error="failed to create the microVM in firecracker-containerd: rpc error: code = Unknown desc = fa
iled to create VM: failed to start the VM: Put \"http://localhost/actions\": EOF"
time="2022-05-22T06:48:40.185197387Z" level=error msg="VM config for pod 93d79475aa67f0fc707f0b5e554185f8a1bc804b47a7427ba590c6e295b319a1 does not exist"
time="2022-05-22T06:48:40.185220167Z" level=error error="VM config for pod does not exist"

firecracker.stderr

M(), setting up a VMM on firecracker.sock" runtime=aws.firecracker
time="2022-05-22T06:49:32.970316862Z" level=info msg="refreshMachineConfiguration: [GET /machine-config][200] getMachineConfigurationOK  &{CPUTemplate: HtEnabled:0xc00053e5e
3 MemSizeMib:0xc00053e588 TrackDirtyPages:false VcpuCount:0xc00053e580}" runtime=aws.firecracker
time="2022-05-22T06:49:32.970531724Z" level=info msg="PutGuestBootSource: [PUT /boot-source][204] putGuestBootSourceNoContent " runtime=aws.firecracker
time="2022-05-22T06:49:32.970552044Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/runtime/default-rootfs.img, slot root_drive, root true." runtime=aws.fi
recracker
time="2022-05-22T06:49:32.970900508Z" level=info msg="Attached drive /var/lib/firecracker-containerd/runtime/default-rootfs.img: [PUT /drives/{drive_id}][204] putGuestDriveB
yIdNoContent " runtime=aws.firecracker
time="2022-05-22T06:49:32.970920308Z" level=info msg="Attaching drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#171/ctrstub0, slot MN2HE43UOVRDA, root
false." runtime=aws.firecracker
time="2022-05-22T06:49:32.971099140Z" level=info msg="Attached drive /var/lib/firecracker-containerd/shim-base/firecracker-containerd#171/ctrstub0: [PUT /drives/{drive_id}][
204] putGuestDriveByIdNoContent " runtime=aws.firecracker
time="2022-05-22T06:49:32.971116620Z" level=info msg="Attaching NIC 171_tap (hwaddr 02:FC:00:00:00:AA) at index 1" runtime=aws.firecracker
time="2022-05-22T06:49:33.156858045Z" level=error msg="Starting instance: Put \"http://localhost/actions\": EOF" runtime=aws.firecracker
time="2022-05-22T06:49:33.156946386Z" level=error msg="failed to create VM" error="failed to start the VM: Put \"http://localhost/actions\": EOF" runtime=aws.firecracker vmI
D=171
time="2022-05-22T06:49:33.157153618Z" level=warning msg="firecracker exited: signal: aborted (core dumped)" runtime=aws.firecracker
time="2022-05-22T06:49:33.162219583Z" level=error msg="shim CreateVM returned error" error="rpc error: code = Unknown desc = failed to create VM: failed to start the VM: Put
\"http://localhost/actions\": EOF"
time="2022-05-22T06:49:38.157281139Z" level=error msg="aws.firecracker: publisher not closed" shim_stream=stderr vmID=171
time="2022-05-22T06:49:38.158392301Z" level=debug msg="shim has been terminated" error="exit status 1" vmID=171

Let me know if you notice anything, or if you need more detailed logs.

Note: all of this is on a single-node cluster.
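The repeated Put \"http://localhost/actions\": EOF errors together with "firecracker exited: signal: aborted (core dumped)" suggest the Firecracker process dies before the microVM ever boots, which commonly happens when /dev/kvm is missing or inaccessible. A minimal host-side sanity check, assuming an Ubuntu host and the cpu-checker package (both assumptions), would be:

# confirm hardware virtualization is exposed and /dev/kvm is usable
lscpu | grep -i -E 'svm|vmx'
ls -l /dev/kvm
sudo apt-get install -y cpu-checker && sudo kvm-ok
# note: on AWS EC2, /dev/kvm is typically only available on bare-metal (*.metal) instance types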

aditya2803 commented 2 years ago

Hi @ustiugov, I managed to fix the problem by setting up on a new machine, enabling KVM, and ensuring that this script worked okay.

The functions are now getting deployed properly. However, I am having an issue with the Istio setup. This is similar to #475.

NAMESPACE          NAME                                       READY   STATUS             RESTARTS         AGE
istio-system       cluster-local-gateway-74c4558686-8g9zs     0/1     CrashLoopBackOff   41 (4m34s ago)   3h9m
istio-system       istio-ingressgateway-f5b59cc7c-qqgrr       0/1     CrashLoopBackOff   41 (4m52s ago)   3h9m

Output of kubectl describe pod cluster-local-gateway-74c4558686-8g9zs -n istio-system

Name:         cluster-local-gateway-74c4558686-8g9zs
Namespace:    istio-system
Priority:     0
Start Time:   Mon, 23 May 2022 13:50:03 +0000
Labels:       app=cluster-local-gateway
              chart=gateways
              heritage=Tiller
              install.operator.istio.io/owning-resource=unknown
              istio=cluster-local-gateway
              istio.io/rev=default
              operator.istio.io/component=IngressGateways
              pod-template-hash=74c4558686
              release=istio
              service.istio.io/canonical-name=cluster-local-gateway
              service.istio.io/canonical-revision=latest
              sidecar.istio.io/inject=false
Annotations:  cni.projectcalico.org/podIP: 192.168.0.34/32
              cni.projectcalico.org/podIPs: 192.168.0.34/32
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/inject: false
Status:       Running
IP:           192.168.0.34
IPs:
  IP:           192.168.0.34
Controlled By:  ReplicaSet/cluster-local-gateway-74c4558686
Containers:
  istio-proxy:
    Container ID:  containerd://079fe320e2704ab386383d25917b927a84a58301a26c76dc02bc09c5c3be988a
    Image:         docker.io/istio/proxyv2:1.12.5
    Image ID:      docker.io/istio/proxyv2@sha256:780f49744311374e0905e5d15a4bd251bbc48284cb653ca9d609ac3894558462
    Ports:         15020/TCP, 8080/TCP, 8443/TCP, 15090/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      proxy
      router
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 23 May 2022 17:00:15 +0000
      Finished:     Mon, 23 May 2022 17:00:16 +0000
    Ready:          False
    Restart Count:  42
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                   first-party-jwt
      PILOT_CERT_PROVIDER:          istiod
      CA_ADDR:                      istiod.istio-system.svc:15012
      NODE_NAME:                     (v1:spec.nodeName)
      POD_NAME:                     cluster-local-gateway-74c4558686-8g9zs (v1:metadata.name)
      POD_NAMESPACE:                istio-system (v1:metadata.namespace)
      INSTANCE_IP:                   (v1:status.podIP)
      HOST_IP:                       (v1:status.hostIP)
      SERVICE_ACCOUNT:               (v1:spec.serviceAccountName)
      ISTIO_META_WORKLOAD_NAME:     cluster-local-gateway
      ISTIO_META_OWNER:             kubernetes://apis/apps/v1/namespaces/istio-system/deployments/cluster-local-gateway
      ISTIO_META_MESH_ID:           cluster.local
      TRUST_DOMAIN:                 cluster.local
      ISTIO_META_UNPRIVILEGED_POD:  true
      ISTIO_META_CLUSTER_ID:        Kubernetes
    Mounts:
      /etc/istio/config from config-volume (rw)
      /etc/istio/ingressgateway-ca-certs from ingressgateway-ca-certs (ro)
      /etc/istio/ingressgateway-certs from ingressgateway-certs (ro)
      /etc/istio/pod from podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tcr6n (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio
    Optional:  true
  ingressgateway-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-ingressgateway-certs
    Optional:    true
  ingressgateway-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-ingressgateway-ca-certs
    Optional:    true
  kube-api-access-tcr6n:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Warning  BackOff  12m (x865 over 3h12m)   kubelet  Back-off restarting failed container
  Normal   Pulled   2m40s (x43 over 3h12m)  kubelet  Container image "docker.io/istio/proxyv2:1.12.5" already present on machine

Output of kubectl logs cluster-local-gateway-74c4558686-8g9zs -n istio-system

2022-05-23T17:00:15.709727Z     info    FLAG: --concurrency="0"
2022-05-23T17:00:15.709824Z     info    FLAG: --domain="istio-system.svc.cluster.local"
2022-05-23T17:00:15.709833Z     info    FLAG: --help="false"
2022-05-23T17:00:15.709838Z     info    FLAG: --log_as_json="false"
2022-05-23T17:00:15.709842Z     info    FLAG: --log_caller=""
2022-05-23T17:00:15.709846Z     info    FLAG: --log_output_level="default:info"
2022-05-23T17:00:15.709850Z     info    FLAG: --log_rotate=""
2022-05-23T17:00:15.709853Z     info    FLAG: --log_rotate_max_age="30"
2022-05-23T17:00:15.709857Z     info    FLAG: --log_rotate_max_backups="1000"
2022-05-23T17:00:15.709861Z     info    FLAG: --log_rotate_max_size="104857600"
2022-05-23T17:00:15.709866Z     info    FLAG: --log_stacktrace_level="default:none"
2022-05-23T17:00:15.709873Z     info    FLAG: --log_target="[stdout]"
2022-05-23T17:00:15.709878Z     info    FLAG: --meshConfig="./etc/istio/config/mesh"
2022-05-23T17:00:15.709882Z     info    FLAG: --outlierLogPath=""
2022-05-23T17:00:15.709886Z     info    FLAG: --proxyComponentLogLevel="misc:error"
2022-05-23T17:00:15.709890Z     info    FLAG: --proxyLogLevel="warning"
2022-05-23T17:00:15.709895Z     info    FLAG: --serviceCluster="istio-proxy"
2022-05-23T17:00:15.709899Z     info    FLAG: --stsPort="0"
2022-05-23T17:00:15.709903Z     info    FLAG: --templateFile=""
2022-05-23T17:00:15.709908Z     info    FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2022-05-23T17:00:15.709913Z     info    FLAG: --vklog="0"
2022-05-23T17:00:15.709918Z     info    Version 1.12.5-6332f0901f96ca97cf114d57b466d4bcd055b08c-Clean
2022-05-23T17:00:15.710614Z     info    Proxy role      ips=[192.168.0.34 fe80::82b:46ff:feca:51d3] type=router id=cluster-local-gateway-74c4558686-8g9zs.istio-system domain=istio-system.svc.cluster.local
2022-05-23T17:00:15.710740Z     info    Apply mesh config from file defaultConfig:
  discoveryAddress: istiod.istio-system.svc:15012
  proxyMetadata: {}
  tracing:
    zipkin:
      address: zipkin.istio-system:9411
enablePrometheusMerge: true
rootNamespace: istio-system
trustDomain: cluster.local
2022-05-23T17:00:15.712776Z     info    Effective config: binaryPath: /usr/local/bin/envoy
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
proxyMetadata: {}
serviceCluster: istio-proxy
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
  zipkin:
    address: zipkin.istio-system:9411

2022-05-23T17:00:15.712806Z     info    JWT policy is first-party-jwt
2022-05-23T17:00:15.718815Z     info    CA Endpoint istiod.istio-system.svc:15012, provider Citadel
2022-05-23T17:00:15.718859Z     info    Opening status port 15020
2022-05-23T17:00:15.718911Z     info    Using CA istiod.istio-system.svc:15012 cert with certs: var/run/secrets/istio/root-cert.pem
2022-05-23T17:00:15.719073Z     info    citadelclient   Citadel client using custom root cert: istiod.istio-system.svc:15012
2022-05-23T17:00:15.741484Z     info    ads     All caches have been synced up in 35.750914ms, marking server ready
2022-05-23T17:00:15.741820Z     info    sds     SDS server for workload certificates started, listening on "etc/istio/proxy/SDS"
2022-05-23T17:00:15.741850Z     info    xdsproxy        Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "Kubernetes"
2022-05-23T17:00:15.741944Z     info    sds     Starting SDS grpc server
2022-05-23T17:00:15.742273Z     info    Pilot SAN: [istiod.istio-system.svc]
2022-05-23T17:00:15.742287Z     info    starting Http service at 127.0.0.1:15004
2022-05-23T17:00:15.743775Z     info    Pilot SAN: [istiod.istio-system.svc]
2022-05-23T17:00:15.745484Z     info    Starting proxy agent
2022-05-23T17:00:15.745533Z     info    Epoch 0 starting
2022-05-23T17:00:15.745555Z     info    Envoy command: [-c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --drain-strategy immediate --parent-shutdown-time-s 60 --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --log-format %Y-%m-%dT%T.%fZ       %l      envoy %n     %v -l warning --component-log-level misc:error]
[warn] evutil_make_internal_pipe_: pipe: Too many open files
[warn] event_base_new_with_config: Unable to make base notifiable.
2022-05-23T17:00:15.959603Z     critical        envoy assert    assert failure: event_base != nullptr. Details: Failed to initialize libevent event_base
2022-05-23T17:00:15.959717Z     critical        envoy backtrace Caught Aborted, suspect faulting address 0x5390000002c
2022-05-23T17:00:15.959725Z     critical        envoy backtrace Backtrace (use tools/stack_decode.py to get line numbers):
2022-05-23T17:00:15.959729Z     critical        envoy backtrace Envoy version: eb9f894bff4c135904eb83513c795db899c838d1/1.20.3-dev/Clean/RELEASE/BoringSSL
2022-05-23T17:00:15.959989Z     critical        envoy backtrace #0: __restore_rt [0x7f328064f3c0]
2022-05-23T17:00:15.968871Z     critical        envoy backtrace #1: Envoy::Event::DispatcherImpl::DispatcherImpl() [0x55f8ff655fc1]
2022-05-23T17:00:15.975578Z     critical        envoy backtrace #2: Envoy::Api::Impl::allocateDispatcher() [0x55f8ff07a740]
2022-05-23T17:00:15.976570Z     info    cache   generated new workload certificate      latency=234.644688ms ttl=23h59m59.023453911s
2022-05-23T17:00:15.976604Z     info    cache   Root cert has changed, start rotating root cert
2022-05-23T17:00:15.976639Z     info    ads     XDS: Incremental Pushing:0 ConnectedEndpoints:0 Version:
2022-05-23T17:00:15.976716Z     info    cache   returned workload trust anchor from cache       ttl=23h59m59.023288508s
2022-05-23T17:00:15.979510Z     critical        envoy backtrace #3: Envoy::Server::ProdWorkerFactory::createWorker() [0x55f8ff075566]
2022-05-23T17:00:15.983446Z     critical        envoy backtrace #4: Envoy::Server::ListenerManagerImpl::ListenerManagerImpl() [0x55f8ff36dd05]
2022-05-23T17:00:15.987376Z     critical        envoy backtrace #5: Envoy::Server::InstanceImpl::initialize() [0x55f8ff05bbda]
2022-05-23T17:00:15.991313Z     critical        envoy backtrace #6: Envoy::Server::InstanceImpl::InstanceImpl() [0x55f8ff0578e4]
2022-05-23T17:00:15.995219Z     critical        envoy backtrace #7: std::__1::make_unique<>() [0x55f8fd698af4]
2022-05-23T17:00:15.999153Z     critical        envoy backtrace #8: Envoy::MainCommonBase::MainCommonBase() [0x55f8fd697e59]
2022-05-23T17:00:16.003037Z     critical        envoy backtrace #9: Envoy::MainCommon::MainCommon() [0x55f8fd699507]
2022-05-23T17:00:16.006932Z     critical        envoy backtrace #10: Envoy::MainCommon::main() [0x55f8fd69969c]
2022-05-23T17:00:16.010816Z     critical        envoy backtrace #11: main [0x55f8fd69590c]
2022-05-23T17:00:16.010876Z     critical        envoy backtrace #12: __libc_start_main [0x7f328046d0b3]
2022-05-23T17:00:16.163848Z     error   Epoch 0 exited with error: signal: aborted (core dumped)
2022-05-23T17:00:16.163874Z     info    No more active epochs, terminating
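The relevant hint here is the "evutil_make_internal_pipe_: pipe: Too many open files" warning right before the Envoy abort: on a 256-CPU host Envoy opens a large number of file descriptors, so the gateway crashes if its container inherits a low open-files limit. One way to inspect and raise the limit that containerd passes down to its containers, assuming a systemd-managed containerd service and treating the 1048576 value as an assumption rather than a recommendation, is:

# check the limit containerd currently passes down
systemctl show containerd -p LimitNOFILE
# raise it via a systemd drop-in, then restart containerd
sudo mkdir -p /etc/systemd/system/containerd.service.d
printf '[Service]\nLimitNOFILE=1048576\n' | sudo tee /etc/systemd/system/containerd.service.d/limits.conf
sudo systemctl daemon-reload && sudo systemctl restart containerd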

Output of kubectl logs istio-ingressgateway-f5b59cc7c-qqgrr -n istio-system

2022-05-23T17:05:05.697445Z     info    FLAG: --concurrency="0"
2022-05-23T17:05:05.697522Z     info    FLAG: --domain="istio-system.svc.cluster.local"
2022-05-23T17:05:05.697529Z     info    FLAG: --help="false"
2022-05-23T17:05:05.697535Z     info    FLAG: --log_as_json="false"
2022-05-23T17:05:05.697539Z     info    FLAG: --log_caller=""
2022-05-23T17:05:05.697543Z     info    FLAG: --log_output_level="default:info"
2022-05-23T17:05:05.697547Z     info    FLAG: --log_rotate=""
2022-05-23T17:05:05.697554Z     info    FLAG: --log_rotate_max_age="30"
2022-05-23T17:05:05.697558Z     info    FLAG: --log_rotate_max_backups="1000"
2022-05-23T17:05:05.697562Z     info    FLAG: --log_rotate_max_size="104857600"
2022-05-23T17:05:05.697567Z     info    FLAG: --log_stacktrace_level="default:none"
2022-05-23T17:05:05.697581Z     info    FLAG: --log_target="[stdout]"
2022-05-23T17:05:05.697589Z     info    FLAG: --meshConfig="./etc/istio/config/mesh"
2022-05-23T17:05:05.697593Z     info    FLAG: --outlierLogPath=""
2022-05-23T17:05:05.697599Z     info    FLAG: --proxyComponentLogLevel="misc:error"
2022-05-23T17:05:05.697603Z     info    FLAG: --proxyLogLevel="warning"
2022-05-23T17:05:05.697609Z     info    FLAG: --serviceCluster="istio-proxy"
2022-05-23T17:05:05.697614Z     info    FLAG: --stsPort="0"
2022-05-23T17:05:05.697618Z     info    FLAG: --templateFile=""
2022-05-23T17:05:05.697623Z     info    FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2022-05-23T17:05:05.697633Z     info    FLAG: --vklog="0"
2022-05-23T17:05:05.697638Z     info    Version 1.12.5-6332f0901f96ca97cf114d57b466d4bcd055b08c-Clean
2022-05-23T17:05:05.698147Z     info    Proxy role      ips=[192.168.0.36 fe80::c0e4:b1ff:fe16:c34d] type=router id=istio-ingressgateway-f5b59cc7c-qqgrr.istio-system domain=istio-system.svc.cluster.local
2022-05-23T17:05:05.698236Z     info    Apply mesh config from file defaultConfig:
  discoveryAddress: istiod.istio-system.svc:15012
  proxyMetadata: {}
  tracing:
    zipkin:
      address: zipkin.istio-system:9411
enablePrometheusMerge: true
rootNamespace: istio-system
trustDomain: cluster.local
2022-05-23T17:05:05.700145Z     info    Effective config: binaryPath: /usr/local/bin/envoy
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
proxyMetadata: {}
serviceCluster: istio-proxy
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
  zipkin:
    address: zipkin.istio-system:9411

2022-05-23T17:05:05.700165Z     info    JWT policy is first-party-jwt
2022-05-23T17:05:05.709326Z     info    CA Endpoint istiod.istio-system.svc:15012, provider Citadel
2022-05-23T17:05:05.709363Z     info    Using CA istiod.istio-system.svc:15012 cert with certs: var/run/secrets/istio/root-cert.pem
2022-05-23T17:05:05.709380Z     info    Opening status port 15020
2022-05-23T17:05:05.709476Z     info    citadelclient   Citadel client using custom root cert: istiod.istio-system.svc:15012
2022-05-23T17:05:05.731186Z     info    ads     All caches have been synced up in 37.532188ms, marking server ready
2022-05-23T17:05:05.731633Z     info    sds     SDS server for workload certificates started, listening on "etc/istio/proxy/SDS"
2022-05-23T17:05:05.731660Z     info    xdsproxy        Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "Kubernetes"
2022-05-23T17:05:05.731736Z     info    sds     Starting SDS grpc server
2022-05-23T17:05:05.732257Z     info    Pilot SAN: [istiod.istio-system.svc]
2022-05-23T17:05:05.732359Z     info    starting Http service at 127.0.0.1:15004
2022-05-23T17:05:05.733798Z     info    Pilot SAN: [istiod.istio-system.svc]
2022-05-23T17:05:05.735114Z     info    Starting proxy agent
2022-05-23T17:05:05.735138Z     info    Epoch 0 starting
2022-05-23T17:05:05.735154Z     info    Envoy command: [-c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --drain-strategy immediate --parent-shutdown-time-s 60 --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --log-format %Y-%m-%dT%T.%fZ       %l      envoy %n     %v -l warning --component-log-level misc:error]
[warn] evutil_make_internal_pipe_: pipe: Too many open files
[warn] event_base_new_with_config: Unable to make base notifiable.
2022-05-23T17:05:05.923436Z     critical        envoy assert    assert failure: event_base != nullptr. Details: Failed to initialize libevent event_base
2022-05-23T17:05:05.923489Z     critical        envoy backtrace Caught Aborted, suspect faulting address 0x53900000027
2022-05-23T17:05:05.923531Z     critical        envoy backtrace Backtrace (use tools/stack_decode.py to get line numbers):
2022-05-23T17:05:05.923535Z     critical        envoy backtrace Envoy version: eb9f894bff4c135904eb83513c795db899c838d1/1.20.3-dev/Clean/RELEASE/BoringSSL
2022-05-23T17:05:05.923785Z     critical        envoy backtrace #0: __restore_rt [0x7eff08eaf3c0]
2022-05-23T17:05:05.932570Z     critical        envoy backtrace #1: Envoy::Event::DispatcherImpl::DispatcherImpl() [0x55f9fadb0fc1]
2022-05-23T17:05:05.939399Z     critical        envoy backtrace #2: Envoy::Api::Impl::allocateDispatcher() [0x55f9fa7d5740]
2022-05-23T17:05:05.943302Z     critical        envoy backtrace #3: Envoy::Server::ProdWorkerFactory::createWorker() [0x55f9fa7d0566]
2022-05-23T17:05:05.947208Z     critical        envoy backtrace #4: Envoy::Server::ListenerManagerImpl::ListenerManagerImpl() [0x55f9faac8d05]
2022-05-23T17:05:05.951100Z     critical        envoy backtrace #5: Envoy::Server::InstanceImpl::initialize() [0x55f9fa7b6bda]
2022-05-23T17:05:05.954994Z     critical        envoy backtrace #6: Envoy::Server::InstanceImpl::InstanceImpl() [0x55f9fa7b28e4]
2022-05-23T17:05:05.958846Z     critical        envoy backtrace #7: std::__1::make_unique<>() [0x55f9f8df3af4]
2022-05-23T17:05:05.962695Z     critical        envoy backtrace #8: Envoy::MainCommonBase::MainCommonBase() [0x55f9f8df2e59]
2022-05-23T17:05:05.966553Z     critical        envoy backtrace #9: Envoy::MainCommon::MainCommon() [0x55f9f8df4507]
2022-05-23T17:05:05.970403Z     critical        envoy backtrace #10: Envoy::MainCommon::main() [0x55f9f8df469c]
2022-05-23T17:05:05.974252Z     critical        envoy backtrace #11: main [0x55f9f8df090c]
2022-05-23T17:05:05.974308Z     critical        envoy backtrace #12: __libc_start_main [0x7eff08ccd0b3]
2022-05-23T17:05:06.128876Z     error   Epoch 0 exited with error: signal: aborted (core dumped)
2022-05-23T17:05:06.129021Z     info    No more active epochs, terminating

Output of kubectl describe pod istio-ingressgateway-f5b59cc7c-qqgrr -n istio-system

Name:         istio-ingressgateway-f5b59cc7c-qqgrr
Namespace:    istio-system
Priority:     0
Start Time:   Mon, 23 May 2022 13:50:03 +0000
Labels:       app=istio-ingressgateway
              chart=gateways
              heritage=Tiller
              install.operator.istio.io/owning-resource=unknown
              istio=ingressgateway
              istio.io/rev=default
              operator.istio.io/component=IngressGateways
              pod-template-hash=f5b59cc7c
              release=istio
              service.istio.io/canonical-name=istio-ingressgateway
              service.istio.io/canonical-revision=latest
              sidecar.istio.io/inject=false
Annotations:  cni.projectcalico.org/podIP: 192.168.0.36/32
              cni.projectcalico.org/podIPs: 192.168.0.36/32
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/inject: false
Status:       Running
IP:           192.168.0.36
IPs:
  IP:           192.168.0.36
Controlled By:  ReplicaSet/istio-ingressgateway-f5b59cc7c
Containers:
  istio-proxy:
    Container ID:  containerd://d64d7ee4594da3fe65ba4775e590f5244323eda1cbfc839cfcda93e6935e057e
    Image:         docker.io/istio/proxyv2:1.12.5
    Image ID:      docker.io/istio/proxyv2@sha256:780f49744311374e0905e5d15a4bd251bbc48284cb653ca9d609ac3894558462
    Ports:         15021/TCP, 8080/TCP, 8443/TCP, 15090/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      proxy
      router
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 23 May 2022 17:05:05 +0000
      Finished:     Mon, 23 May 2022 17:05:06 +0000
    Ready:          False
    Restart Count:  43
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                   first-party-jwt
      PILOT_CERT_PROVIDER:          istiod
      CA_ADDR:                      istiod.istio-system.svc:15012
      NODE_NAME:                     (v1:spec.nodeName)
      POD_NAME:                     istio-ingressgateway-f5b59cc7c-qqgrr (v1:metadata.name)
      POD_NAMESPACE:                istio-system (v1:metadata.namespace)
      INSTANCE_IP:                   (v1:status.podIP)
      HOST_IP:                       (v1:status.hostIP)
      SERVICE_ACCOUNT:               (v1:spec.serviceAccountName)
      ISTIO_META_WORKLOAD_NAME:     istio-ingressgateway
      ISTIO_META_OWNER:             kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway
      ISTIO_META_MESH_ID:           cluster.local
      TRUST_DOMAIN:                 cluster.local
      ISTIO_META_UNPRIVILEGED_POD:  true
      ISTIO_META_CLUSTER_ID:        Kubernetes
    Mounts:
      /etc/istio/config from config-volume (rw)
      /etc/istio/ingressgateway-ca-certs from ingressgateway-ca-certs (ro)
      /etc/istio/ingressgateway-certs from ingressgateway-certs (ro)
      /etc/istio/pod from podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4m2xw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio
    Optional:  true
  ingressgateway-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-ingressgateway-certs
    Optional:    true
  ingressgateway-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-ingressgateway-ca-certs
    Optional:    true
  kube-api-access-4m2xw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                      From     Message
  ----     ------   ----                     ----     -------
  Warning  BackOff  6m11s (x915 over 3h16m)  kubelet  Back-off restarting failed container
  Normal   Pulled   74s (x44 over 3h16m)     kubelet  Container image "docker.io/istio/proxyv2:1.12.5" already present on machine
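For completeness, the crashed container's previous output (the Envoy trace shown above) can be retrieved directly from the pod; this is only a pointer to the same information:

$ kubectl logs -n istio-system istio-ingressgateway-f5b59cc7c-qqgrr -c istio-proxy --previous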

Istio error during setup

- Processing resources for Istio core.
✔ Istio core installed
- Processing resources for Istiod.
- Processing resources for Istiod. Waiting for Deployment/istio-system/istiod
✔ Istiod installed
- Processing resources for Ingress gateways.
- Processing resources for Ingress gateways. Waiting for Deployment/istio-system/cluster-local-gateway, Deployment/istio-system/istio-ingressgateway
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
  Deployment/istio-system/cluster-local-gateway (container failed to start: CrashLoopBackOff: back-off 2m40s restarting failed container=istio-proxy pod=cluster-local-gateway-74c4558686-ncbjb_istio-system(67dc64df-0d90-4d43-aa1e-e4ed458f1f90))
  Deployment/istio-system/istio-ingressgateway (container failed to start: CrashLoopBackOff: back-off 2m40s restarting failed container=istio-proxy pod=istio-ingressgateway-f5b59cc7c-bj9mc_istio-system(06717413-71bf-409a-b6e0-98309676c0c3))
- Pruning removed resources
Error: failed to install manifests: errors occurred during operation
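Both gateway deployments fail on the same istio-proxy container, so they very likely share one root cause. A hedged way to confirm this, and to watch both recover once the underlying issue is fixed:

$ kubectl get deployments -n istio-system      # istiod, istio-ingressgateway, cluster-local-gateway
$ kubectl logs -n istio-system deploy/cluster-local-gateway -c istio-proxy --previous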
ustiugov commented 2 years ago

@aditya2803 ok, the prerequisites are a good catch. I don't quite see how functions could be deployed properly without the Istio installation succeeding. Can you collect a complete log of the bash scripts that set up the cluster?

aditya2803 commented 2 years ago

Here it is:

# KVM
$ [ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "OK" || echo "FAIL"
$ sudo apt -y install bridge-utils cpu-checker libvirt-clients libvirt-daemon qemu qemu-kvm
$ kvm-ok
$ err=""; [ "$(uname) $(uname -m)" = "Linux x86_64" ]   || err="ERROR: your system is not Linux x86_64."; [ -r /dev/kvm ] && [ -w /dev/kvm ]   || err="$err\nERROR: /dev/kvm is innaccessible."; (( $(uname -r | cut -d. -f1)*1000 + $(uname -r | cut -d. -f2) >= 4014 ))   || err="$err\nERROR: your kernel version ($(uname -r)) is too old."; dmesg | grep -i "hypervisor detected"   && echo "WARNING: you are running in a virtual machine. Firecracker is not well tested under nested virtualization."; [ -z "$err" ] && echo "Your system looks ready for Firecracker!" || echo -e "$err"
$ ls -al /dev/kvm
$ grep kvm /etc/group
$ sudo adduser aditya kvm
$ grep kvm /etc/group
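One note on the adduser step above: new group membership only applies to new login sessions, so the /dev/kvm check is worth repeating after re-logging in (or in a subshell started with newgrp), for example:

$ newgrp kvm
$ [ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "OK" || echo "FAIL"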

# GIT
$ gh repo clone ease-lab/vhive
$ cd vhive
$ gh pr checkout 465

# CLUSTER
$ ./scripts/cloudlab/setup_node.sh > >(tee -a /tmp/vhive-logs/setup_worker_kubelet.stdout) 2> >(tee -a /tmp/vhive-logs/setup_worker_kubelet.stderr >&2)
$ sudo screen -dmS containerd bash -c "containerd > >(tee -a /tmp/vhive-logs/containerd.stdout) 2> >(tee -a /tmp/vhive-logs/containerd.stderr >&2)"
$ sleep 5;
$ sudo PATH=$PATH screen -dmS firecracker bash -c "/usr/local/bin/firecracker-containerd --config /etc/firecracker-containerd/config.toml > >(tee -a /tmp/vhive-logs/firecracker.stdout) 2> >(tee -a /tmp/vhive-logs/firecracker.stderr >&2)"
$ sleep 5;
$ source /etc/profile && go build
$ sudo screen -dmS vhive bash -c "./vhive > >(tee -a /tmp/vhive-logs/vhive.stdout) 2> >(tee -a /tmp/vhive-logs/vhive.stderr >&2)"
$ sleep 5;
$ ./scripts/cluster/create_one_node_cluster.sh > >(tee -a /tmp/vhive-logs/create_singlenode_cluster.stdout) 2> >(tee -a /tmp/vhive-logs/create_singlenode_cluster.stderr >&2)
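One verification step that could be added right after the last script, before deploying any function (the istio-1.12.5 path is taken from the installer output quoted later in this thread, and istioctl x precheck is the check the installer itself suggests):

$ kubectl get pods -n istio-system          # istiod and both gateways should be Running and Ready
$ kubectl get pods -n knative-serving
$ export PATH="$PATH:$HOME/vhive/istio-1.12.5/bin"
$ istioctl x precheck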

Also, for my understanding: isn't Istio just used for serving the function endpoints? Isn't the deployment of functions independent of it? I understand why function invocation won't work without it, but I'm not clear on the deployment part.
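As a general Knative note rather than anything vHive-specific: deploying a function only creates the Knative Service object, and that Service does not report Ready until its Route has been programmed by the networking layer, which in this setup is net-istio backed by the Istio gateways. So a broken Istio install tends to surface as deployments that never become Ready, even before any invocation. A sketch for inspecting this, where <function-name> is a placeholder:

$ kubectl get ksvc --all-namespaces
$ kubectl get ksvc <function-name> -n default -o jsonpath='{.status.conditions}'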

ustiugov commented 2 years ago

Debugging function deployment in a failed Knative cluster is not a good strategy. Let's focus on Istio first.

Please provide /tmp/vhive-logs/setup_worker_kubelet.* and /tmp/vhive-logs/create_singlenode_cluster.*.

aditya2803 commented 2 years ago

Sure. I tried running the cleanup and then starting the cluster again a couple of times, so the output from those attempts is appended to the logs. Sorry about that.

/tmp/vhive-logs/setup_worker_kubelet.stdout logs

APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "0";
APT::Periodic::Unattended-Upgrade "0";
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "0";
APT::Periodic::Unattended-Upgrade "0";
containerd github.com/containerd/containerd v1.6.2 de8046a5501db9e0e478e1c10cbcfb21af4c6b2d
OK
🚧 Compile
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
0 209715200 thin-pool /dev/loop4 /dev/loop3 128 32768 1 skip_block_zeroing
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "0";
APT::Periodic::Unattended-Upgrade "0";
containerd github.com/containerd/containerd v1.6.2 de8046a5501db9e0e478e1c10cbcfb21af4c6b2d
OK
🚧 Compile
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
0 209715200 thin-pool /dev/loop6 /dev/loop5 128 32768 1 skip_block_zeroing
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Download-Upgradeable-Packages "0";
APT::Periodic::AutocleanInterval "0";
APT::Periodic::Unattended-Upgrade "0";
containerd github.com/containerd/containerd v1.6.2 de8046a5501db9e0e478e1c10cbcfb21af4c6b2d
OK
🚧 Compile
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
0 209715200 thin-pool /dev/loop12 /dev/loop11 128 32768 1 skip_block_zeroing

/tmp/vhive-logs/setup_worker_kubelet.stderr logs

Created symlink /etc/systemd/system/apt-daily.service → /dev/null.
Created symlink /etc/systemd/system/apt-daily-upgrade.service → /dev/null.
E: Unable to locate package skopeo
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
sudo: route: command not found
sudo: nft: command not found
sudo: nft: command not found
sudo: nft: command not found
sudo: nft: command not found
sudo: nft: command not found
sudo: nft: command not found
E: Unable to locate package skopeo
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
Warning: apt-key output should not be parsed (stdout is not a terminal)
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
device-mapper: reload ioctl on fc-dev-thinpool  failed: No such device or address
Command failed.
E: Unable to locate package skopeo
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
Warning: apt-key output should not be parsed (stdout is not a terminal)
fatal: destination path '/home/aditya/client' already exists and is not an empty directory.
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
E: Unable to locate package skopeo
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
Warning: apt-key output should not be parsed (stdout is not a terminal)
fatal: destination path '/home/aditya/client' already exists and is not an empty directory.
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
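Two patterns stand out in this stderr log: route and nft are missing from the node, and apt cannot locate skopeo. The first two come from standard Ubuntu packages; skopeo may need an extra repository depending on the release (this is a remediation sketch, not a confirmed requirement for vHive):

$ sudo apt-get update
$ sudo apt-get install -y net-tools nftables     # provides route and nft respectively
# skopeo is not in the default archive on older Ubuntu releases; it may require the libcontainers (Kubic) repository or a newer release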

/tmp/vhive-logs/create_singlenode_cluster.stdout logs

[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local spgblr-dyt-09] and IPs [10.96.0.1 10.138.143.25]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost spgblr-dyt-09] and IPs [10.138.143.25 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost spgblr-dyt-09] and IPs [10.138.143.25 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.503210 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node spgblr-dyt-09 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node spgblr-dyt-09 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: as63qq.en6z0w4fi61kpsvr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.138.143.25:6443 --token as63qq.en6z0w4fi61kpsvr \
        --discovery-token-ca-cert-hash sha256:66e83512c0e30b74d197786cf28a0f4eba4fc7cc153d09cf9c1eab12b326256e
node/spgblr-dyt-09 untainted
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.apps/canal created
serviceaccount/canal created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
configmap/kube-proxy configured
namespace/metallb-system created
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
secret/memberlist created
configmap/config created

Downloading istio-1.12.5 from https://github.com/istio/istio/releases/download/1.12.5/istio-1.12.5-linux-amd64.tar.gz ...

Istio 1.12.5 Download Complete!

Istio has been successfully downloaded into the istio-1.12.5 folder on your system.

Next Steps:
See https://istio.io/latest/docs/setup/install/ to add Istio to your Kubernetes cluster.

To configure the istioctl client tool for your workstation,
add the /home/aditya/vhive/istio-1.12.5/bin directory to your environment path variable with:
         export PATH="$PATH:/home/aditya/vhive/istio-1.12.5/bin"

Begin the Istio pre-installation check by running:
         istioctl x precheck

Need more information? Visit https://istio.io/latest/docs/setup/install/
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created
namespace/knative-serving created
clusterrole.rbac.authorization.k8s.io/knative-serving-aggregated-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-serving-core created
clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created
serviceaccount/controller created
clusterrole.rbac.authorization.k8s.io/knative-serving-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-addressable-resolver created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
image.caching.internal.knative.dev/queue-proxy created
configmap/config-autoscaler created
configmap/config-defaults created
configmap/config-deployment created
configmap/config-domain created
configmap/config-features created
configmap/config-gc created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-network created
configmap/config-observability created
configmap/config-tracing created
horizontalpodautoscaler.autoscaling/activator created
poddisruptionbudget.policy/activator-pdb created
deployment.apps/activator created
service/activator-service created
deployment.apps/autoscaler created
service/autoscaler created
deployment.apps/controller created
service/controller created
deployment.apps/domain-mapping created
deployment.apps/domainmapping-webhook created
service/domainmapping-webhook created
horizontalpodautoscaler.autoscaling/webhook created
poddisruptionbudget.policy/webhook-pdb created
deployment.apps/webhook created
service/webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.domainmapping.serving.knative.dev created
secret/domainmapping-webhook-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.domainmapping.serving.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created
secret/webhook-certs created
namespace/registry created
persistentvolume/docker-repo-pv created
persistentvolumeclaim/docker-repo-pvc created
replicaset.apps/docker-registry-pod created
service/docker-registry created
daemonset.apps/registry-etc-hosts-update created
job.batch/default-domain created
service/default-domain-service created
clusterrole.rbac.authorization.k8s.io/knative-serving-istio created
gateway.networking.istio.io/knative-ingress-gateway created
gateway.networking.istio.io/knative-local-gateway created
service/knative-local-gateway created
configmap/config-istio created
peerauthentication.security.istio.io/webhook created
peerauthentication.security.istio.io/domainmapping-webhook created
peerauthentication.security.istio.io/net-istio-webhook created
deployment.apps/net-istio-controller created
deployment.apps/net-istio-webhook created
secret/net-istio-webhook-certs created
service/net-istio-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.istio.networking.internal.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.istio.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev created
namespace/knative-eventing created
serviceaccount/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-sources-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator created
serviceaccount/pingsource-mt-adapter created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created
serviceaccount/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook created
rolebinding.rbac.authorization.k8s.io/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding created
configmap/config-br-default-channel created
configmap/config-br-defaults created
configmap/default-ch-webhook created
configmap/config-ping-defaults created
configmap/config-features created
configmap/config-kreference-mapping created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/config-tracing created
deployment.apps/eventing-controller created
deployment.apps/pingsource-mt-adapter created
horizontalpodautoscaler.autoscaling/eventing-webhook created
poddisruptionbudget.policy/eventing-webhook created
deployment.apps/eventing-webhook created
service/eventing-webhook created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/addressable-resolver created
clusterrole.rbac.authorization.k8s.io/service-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/eventing-broker-filter created
clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress created
clusterrole.rbac.authorization.k8s.io/eventing-config-reader created
clusterrole.rbac.authorization.k8s.io/channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/meta-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-flows-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-sources-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-bindings-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-eventing-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created
clusterrole.rbac.authorization.k8s.io/podspecable-binding created
clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding created
clusterrole.rbac.authorization.k8s.io/source-observer created
clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer created
clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook created
role.rbac.authorization.k8s.io/knative-eventing-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev created
secret/eventing-webhook-certs created
mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev created
namespace/knative-eventing unchanged
serviceaccount/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-controller created
rolebinding.rbac.authorization.k8s.io/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-controller-resolver created
serviceaccount/imc-dispatcher created
clusterrolebinding.rbac.authorization.k8s.io/imc-dispatcher created
configmap/config-imc-event-dispatcher created
configmap/config-observability unchanged
configmap/config-tracing configured
deployment.apps/imc-controller created
service/inmemorychannel-webhook created
service/imc-dispatcher created
deployment.apps/imc-dispatcher created
customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev created
clusterrole.rbac.authorization.k8s.io/imc-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/imc-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/imc-controller created
clusterrole.rbac.authorization.k8s.io/imc-dispatcher created
role.rbac.authorization.k8s.io/knative-inmemorychannel-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/inmemorychannel.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.inmemorychannel.eventing.knative.dev created
secret/inmemorychannel-webhook-certs created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-channel-broker-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
serviceaccount/mt-broker-filter created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
serviceaccount/mt-broker-ingress created
clusterrolebinding.rbac.authorization.k8s.io/eventing-mt-channel-broker-controller created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
deployment.apps/mt-broker-filter created
service/broker-filter created
deployment.apps/mt-broker-ingress created
service/broker-ingress created
deployment.apps/mt-broker-controller created
horizontalpodautoscaler.autoscaling/broker-ingress-hpa created
horizontalpodautoscaler.autoscaling/broker-filter-hpa created
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.110.46.166   192.168.1.240   15021:31995/TCP,80:32106/TCP,443:31066/TCP   5m18s
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local spgblr-dyt-09] and IPs [10.96.0.1 10.138.143.25]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost spgblr-dyt-09] and IPs [10.138.143.25 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost spgblr-dyt-09] and IPs [10.138.143.25 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.003029 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node spgblr-dyt-09 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node spgblr-dyt-09 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6pe1hk.zrosfdtcheqvjew7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.138.143.25:6443 --token 6pe1hk.zrosfdtcheqvjew7 \
        --discovery-token-ca-cert-hash sha256:874d8674e33941a21a4eb43813c30dee409fbfdf72a5e27db5a88c3ba8e312fd
node/spgblr-dyt-09 untainted
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.apps/canal created
serviceaccount/canal created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
configmap/kube-proxy configured
namespace/metallb-system created
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
secret/memberlist created
configmap/config created

Downloading istio-1.12.5 from https://github.com/istio/istio/releases/download/1.12.5/istio-1.12.5-linux-amd64.tar.gz ...

Istio 1.12.5 Download Complete!

Istio has been successfully downloaded into the istio-1.12.5 folder on your system.

Next Steps:
See https://istio.io/latest/docs/setup/install/ to add Istio to your Kubernetes cluster.

To configure the istioctl client tool for your workstation,
add the /home/aditya/vhive/istio-1.12.5/bin directory to your environment path variable with:
         export PATH="$PATH:/home/aditya/vhive/istio-1.12.5/bin"

Begin the Istio pre-installation check by running:
         istioctl x precheck

Need more information? Visit https://istio.io/latest/docs/setup/install/
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created
namespace/knative-serving created
clusterrole.rbac.authorization.k8s.io/knative-serving-aggregated-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-serving-core created
clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created
serviceaccount/controller created
clusterrole.rbac.authorization.k8s.io/knative-serving-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-addressable-resolver created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
image.caching.internal.knative.dev/queue-proxy created
configmap/config-autoscaler created
configmap/config-defaults created
configmap/config-deployment created
configmap/config-domain created
configmap/config-features created
configmap/config-gc created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-network created
configmap/config-observability created
configmap/config-tracing created
horizontalpodautoscaler.autoscaling/activator created
poddisruptionbudget.policy/activator-pdb created
deployment.apps/activator created
service/activator-service created
deployment.apps/autoscaler created
service/autoscaler created
deployment.apps/controller created
service/controller created
deployment.apps/domain-mapping created
deployment.apps/domainmapping-webhook created
service/domainmapping-webhook created
horizontalpodautoscaler.autoscaling/webhook created
poddisruptionbudget.policy/webhook-pdb created
deployment.apps/webhook created
service/webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.domainmapping.serving.knative.dev created
secret/domainmapping-webhook-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.domainmapping.serving.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created
secret/webhook-certs created
namespace/registry created
persistentvolume/docker-repo-pv created
persistentvolumeclaim/docker-repo-pvc created
replicaset.apps/docker-registry-pod created
service/docker-registry created
daemonset.apps/registry-etc-hosts-update created
job.batch/default-domain created
service/default-domain-service created
clusterrole.rbac.authorization.k8s.io/knative-serving-istio created
gateway.networking.istio.io/knative-ingress-gateway created
gateway.networking.istio.io/knative-local-gateway created
service/knative-local-gateway created
configmap/config-istio created
peerauthentication.security.istio.io/webhook created
peerauthentication.security.istio.io/domainmapping-webhook created
peerauthentication.security.istio.io/net-istio-webhook created
deployment.apps/net-istio-controller created
deployment.apps/net-istio-webhook created
secret/net-istio-webhook-certs created
service/net-istio-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.istio.networking.internal.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.istio.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev created
namespace/knative-eventing created
serviceaccount/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-sources-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator created
serviceaccount/pingsource-mt-adapter created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created
serviceaccount/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook created
rolebinding.rbac.authorization.k8s.io/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding created
configmap/config-br-default-channel created
configmap/config-br-defaults created
configmap/default-ch-webhook created
configmap/config-ping-defaults created
configmap/config-features created
configmap/config-kreference-mapping created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/config-tracing created
deployment.apps/eventing-controller created
deployment.apps/pingsource-mt-adapter created
horizontalpodautoscaler.autoscaling/eventing-webhook created
poddisruptionbudget.policy/eventing-webhook created
deployment.apps/eventing-webhook created
service/eventing-webhook created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/addressable-resolver created
clusterrole.rbac.authorization.k8s.io/service-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/eventing-broker-filter created
clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress created
clusterrole.rbac.authorization.k8s.io/eventing-config-reader created
clusterrole.rbac.authorization.k8s.io/channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/meta-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-flows-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-sources-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-bindings-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-eventing-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created
clusterrole.rbac.authorization.k8s.io/podspecable-binding created
clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding created
clusterrole.rbac.authorization.k8s.io/source-observer created
clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer created
clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook created
role.rbac.authorization.k8s.io/knative-eventing-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev created
secret/eventing-webhook-certs created
mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev created
namespace/knative-eventing unchanged
serviceaccount/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-controller created
rolebinding.rbac.authorization.k8s.io/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-controller-resolver created
serviceaccount/imc-dispatcher created
clusterrolebinding.rbac.authorization.k8s.io/imc-dispatcher created
configmap/config-imc-event-dispatcher created
configmap/config-observability unchanged
configmap/config-tracing configured
deployment.apps/imc-controller created
service/inmemorychannel-webhook created
service/imc-dispatcher created
deployment.apps/imc-dispatcher created
customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev created
clusterrole.rbac.authorization.k8s.io/imc-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/imc-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/imc-controller created
clusterrole.rbac.authorization.k8s.io/imc-dispatcher created
role.rbac.authorization.k8s.io/knative-inmemorychannel-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/inmemorychannel.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.inmemorychannel.eventing.knative.dev created
secret/inmemorychannel-webhook-certs created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-channel-broker-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
serviceaccount/mt-broker-filter created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
serviceaccount/mt-broker-ingress created
clusterrolebinding.rbac.authorization.k8s.io/eventing-mt-channel-broker-controller created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
deployment.apps/mt-broker-filter created
service/broker-filter created
deployment.apps/mt-broker-ingress created
service/broker-ingress created
deployment.apps/mt-broker-controller created
horizontalpodautoscaler.autoscaling/broker-ingress-hpa created
horizontalpodautoscaler.autoscaling/broker-filter-hpa created
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.100.129.155   192.168.1.240   15021:32503/TCP,80:30550/TCP,443:30395/TCP   5m22s
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local spgblr-dyt-09] and IPs [10.96.0.1 10.138.143.25]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost spgblr-dyt-09] and IPs [10.138.143.25 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost spgblr-dyt-09] and IPs [10.138.143.25 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.503265 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node spgblr-dyt-09 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node spgblr-dyt-09 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6kix0t.4r4r5vhxogz2ywqo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.138.143.25:6443 --token 6kix0t.4r4r5vhxogz2ywqo \
        --discovery-token-ca-cert-hash sha256:5b43e41d27f166c17df05144564a09955b1a06f5d89b5a0a245d868542f58fc3
node/spgblr-dyt-09 untainted
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.apps/canal created
serviceaccount/canal created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
configmap/kube-proxy configured
namespace/metallb-system created
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
secret/memberlist created
configmap/config created

Downloading istio-1.12.5 from https://github.com/istio/istio/releases/download/1.12.5/istio-1.12.5-linux-amd64.tar.gz ...

Istio 1.12.5 Download Complete!

Istio has been successfully downloaded into the istio-1.12.5 folder on your system.

Next Steps:
See https://istio.io/latest/docs/setup/install/ to add Istio to your Kubernetes cluster.

To configure the istioctl client tool for your workstation,
add the /home/aditya/vhive/istio-1.12.5/bin directory to your environment path variable with:
         export PATH="$PATH:/home/aditya/vhive/istio-1.12.5/bin"

Begin the Istio pre-installation check by running:
         istioctl x precheck

Need more information? Visit https://istio.io/latest/docs/setup/install/
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created
namespace/knative-serving created
clusterrole.rbac.authorization.k8s.io/knative-serving-aggregated-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-serving-core created
clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created
serviceaccount/controller created
clusterrole.rbac.authorization.k8s.io/knative-serving-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-addressable-resolver created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
image.caching.internal.knative.dev/queue-proxy created
configmap/config-autoscaler created
configmap/config-defaults created
configmap/config-deployment created
configmap/config-domain created
configmap/config-features created
configmap/config-gc created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-network created
configmap/config-observability created
configmap/config-tracing created
horizontalpodautoscaler.autoscaling/activator created
poddisruptionbudget.policy/activator-pdb created
deployment.apps/activator created
service/activator-service created
deployment.apps/autoscaler created
service/autoscaler created
deployment.apps/controller created
service/controller created
deployment.apps/domain-mapping created
deployment.apps/domainmapping-webhook created
service/domainmapping-webhook created
horizontalpodautoscaler.autoscaling/webhook created
poddisruptionbudget.policy/webhook-pdb created
deployment.apps/webhook created
service/webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.domainmapping.serving.knative.dev created
secret/domainmapping-webhook-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.domainmapping.serving.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created
secret/webhook-certs created
namespace/registry created
persistentvolume/docker-repo-pv created
persistentvolumeclaim/docker-repo-pvc created
replicaset.apps/docker-registry-pod created
service/docker-registry created
daemonset.apps/registry-etc-hosts-update created
job.batch/default-domain created
service/default-domain-service created
clusterrole.rbac.authorization.k8s.io/knative-serving-istio created
gateway.networking.istio.io/knative-ingress-gateway created
gateway.networking.istio.io/knative-local-gateway created
service/knative-local-gateway created
configmap/config-istio created
peerauthentication.security.istio.io/webhook created
peerauthentication.security.istio.io/domainmapping-webhook created
peerauthentication.security.istio.io/net-istio-webhook created
deployment.apps/net-istio-controller created
deployment.apps/net-istio-webhook created
secret/net-istio-webhook-certs created
service/net-istio-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.istio.networking.internal.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.istio.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev created
namespace/knative-eventing created
serviceaccount/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-sources-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator created
serviceaccount/pingsource-mt-adapter created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created
serviceaccount/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook created
rolebinding.rbac.authorization.k8s.io/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding created
configmap/config-br-default-channel created
configmap/config-br-defaults created
configmap/default-ch-webhook created
configmap/config-ping-defaults created
configmap/config-features created
configmap/config-kreference-mapping created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/config-tracing created
deployment.apps/eventing-controller created
deployment.apps/pingsource-mt-adapter created
horizontalpodautoscaler.autoscaling/eventing-webhook created
poddisruptionbudget.policy/eventing-webhook created
deployment.apps/eventing-webhook created
service/eventing-webhook created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/addressable-resolver created
clusterrole.rbac.authorization.k8s.io/service-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/eventing-broker-filter created
clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress created
clusterrole.rbac.authorization.k8s.io/eventing-config-reader created
clusterrole.rbac.authorization.k8s.io/channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/meta-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-flows-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-sources-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-bindings-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-eventing-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created
clusterrole.rbac.authorization.k8s.io/podspecable-binding created
clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding created
clusterrole.rbac.authorization.k8s.io/source-observer created
clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer created
clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook created
role.rbac.authorization.k8s.io/knative-eventing-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev created
secret/eventing-webhook-certs created
mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev created
namespace/knative-eventing unchanged
serviceaccount/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-controller created
rolebinding.rbac.authorization.k8s.io/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-controller-resolver created
serviceaccount/imc-dispatcher created
clusterrolebinding.rbac.authorization.k8s.io/imc-dispatcher created
configmap/config-imc-event-dispatcher created
configmap/config-observability unchanged
configmap/config-tracing configured
deployment.apps/imc-controller created
service/inmemorychannel-webhook created
service/imc-dispatcher created
deployment.apps/imc-dispatcher created
customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev created
clusterrole.rbac.authorization.k8s.io/imc-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/imc-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/imc-controller created
clusterrole.rbac.authorization.k8s.io/imc-dispatcher created
role.rbac.authorization.k8s.io/knative-inmemorychannel-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/inmemorychannel.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.inmemorychannel.eventing.knative.dev created
secret/inmemorychannel-webhook-certs created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-channel-broker-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
serviceaccount/mt-broker-filter created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
serviceaccount/mt-broker-ingress created
clusterrolebinding.rbac.authorization.k8s.io/eventing-mt-channel-broker-controller created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
deployment.apps/mt-broker-filter created
service/broker-filter created
deployment.apps/mt-broker-ingress created
service/broker-ingress created
deployment.apps/mt-broker-controller created
horizontalpodautoscaler.autoscaling/broker-ingress-hpa created
horizontalpodautoscaler.autoscaling/broker-filter-hpa created
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.106.207.86   192.168.1.240   15021:32734/TCP,80:31659/TCP,443:31125/TCP   5m20s

/tmp/vhive-logs/create_singlenode_cluster.stderr logs

+++ dirname ./scripts/cluster/create_one_node_cluster.sh
++ cd ./scripts/cluster
++ pwd
+ DIR=/home/aditya/vhive/scripts/cluster
++ cd /home/aditya/vhive/scripts/cluster
++ cd ..
++ cd ..
++ pwd
+ ROOT=/home/aditya/vhive
+ STOCK_CONTAINERD=
+ /home/aditya/vhive/scripts/cluster/setup_worker_kubelet.sh
+ '[' '' == stock-only ']'
+ CRI_SOCK=/etc/vhive-cri/vhive-cri.sock
+++ cat /proc/1/cpuset
++ basename /
+ CONTAINERID=/
+ '[' 64 -eq 1 ']'
+ sudo kubeadm init --ignore-preflight-errors=all --cri-socket /etc/vhive-cri/vhive-cri.sock --pod-network-cidr=192.168.0.0/16
I0523 12:45:28.989075   60812 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
+ mkdir -p /home/aditya/.kube
+ sudo cp -i /etc/kubernetes/admin.conf /home/aditya/.kube/config
++ id -u
++ id -g
+ sudo chown 1001:27 /home/aditya/.kube/config
+ '[' 1001 -eq 0 ']'
+ kubectl taint nodes --all node-role.kubernetes.io/master-
+ /home/aditya/vhive/scripts/cluster/setup_master_node.sh
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
Warning: resource configmaps/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   101  100   101    0     0    349      0 --:--:-- --:--:-- --:--:--   349
100  4926  100  4926    0     0   6928      0 --:--:-- --:--:-- --:--:--  6928
! values.global.jwtPolicy is deprecated; use Values.global.jwtPolicy=third-party-jwt. See http://istio.io/latest/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for more information instead

- Processing resources for Istio core.
✔ Istio core installed
- Processing resources for Istiod.
- Processing resources for Istiod. Waiting for Deployment/istio-system/istiod
✔ Istiod installed
- Processing resources for Ingress gateways.
- Processing resources for Ingress gateways. Waiting for Deployment/istio-system/cluster-local-gateway, Deployment/istio-system/istio-ingressgateway
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
  Deployment/istio-system/cluster-local-gateway (container failed to start: CrashLoopBackOff: back-off 2m40s restarting failed container=istio-proxy pod=cluster-local-gateway-74c4558686-ncbjb_istio-system(67dc64df-0d90-4d43-aa1e-e4ed458f1f90))
  Deployment/istio-system/istio-ingressgateway (container failed to start: CrashLoopBackOff: back-off 2m40s restarting failed container=istio-proxy pod=istio-ingressgateway-f5b59cc7c-bj9mc_istio-system(06717413-71bf-409a-b6e0-98309676c0c3))
- Pruning removed resources
Error: failed to install manifests: errors occurred during operation
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
+++ dirname ./scripts/cluster/create_one_node_cluster.sh
++ cd ./scripts/cluster
++ pwd
+ DIR=/home/aditya/vhive/scripts/cluster
++ cd /home/aditya/vhive/scripts/cluster
++ cd ..
++ cd ..
++ pwd
+ ROOT=/home/aditya/vhive
+ STOCK_CONTAINERD=
+ /home/aditya/vhive/scripts/cluster/setup_worker_kubelet.sh
+ '[' '' == stock-only ']'
+ CRI_SOCK=/etc/vhive-cri/vhive-cri.sock
+++ cat /proc/1/cpuset
++ basename /
+ CONTAINERID=/
+ '[' 64 -eq 1 ']'
+ sudo kubeadm init --ignore-preflight-errors=all --cri-socket /etc/vhive-cri/vhive-cri.sock --pod-network-cidr=192.168.0.0/16
I0523 13:07:32.510735   97167 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
+ mkdir -p /home/aditya/.kube
+ sudo cp -i /etc/kubernetes/admin.conf /home/aditya/.kube/config
++ id -u
++ id -g
+ sudo chown 1001:27 /home/aditya/.kube/config
+ '[' 1001 -eq 0 ']'
+ kubectl taint nodes --all node-role.kubernetes.io/master-
+ /home/aditya/vhive/scripts/cluster/setup_master_node.sh
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
Warning: resource configmaps/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   101  100   101    0     0    353      0 --:--:-- --:--:-- --:--:--   354
100  4926  100  4926    0     0   6889      0 --:--:-- --:--:-- --:--:--  6889
! values.global.jwtPolicy is deprecated; use Values.global.jwtPolicy=third-party-jwt. See http://istio.io/latest/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for more information instead

- Processing resources for Istio core.
✔ Istio core installed
- Processing resources for Istiod.
- Processing resources for Istiod. Waiting for Deployment/istio-system/istiod
✔ Istiod installed
- Processing resources for Ingress gateways.
- Processing resources for Ingress gateways. Waiting for Deployment/istio-system/cluster-local-gateway, Deployment/istio-system/istio-ingressgateway
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
  Deployment/istio-system/cluster-local-gateway (container failed to start: CrashLoopBackOff: back-off 2m40s restarting failed container=istio-proxy pod=cluster-local-gateway-74c4558686-frksv_istio-system(25eb9504-13a3-410a-b986-6bcfc0170331))
  Deployment/istio-system/istio-ingressgateway (container failed to start: CrashLoopBackOff: back-off 2m40s restarting failed container=istio-proxy pod=istio-ingressgateway-f5b59cc7c-l824g_istio-system(3aeafca5-0555-4429-8d23-05e3cddc49f4))
- Pruning removed resources
Error: failed to install manifests: errors occurred during operation
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
+++ dirname ./scripts/cluster/create_one_node_cluster.sh
++ cd ./scripts/cluster
++ pwd
+ DIR=/home/aditya/vhive/scripts/cluster
++ cd /home/aditya/vhive/scripts/cluster
++ cd ..
++ cd ..
++ pwd
+ ROOT=/home/aditya/vhive
+ STOCK_CONTAINERD=
+ /home/aditya/vhive/scripts/cluster/setup_worker_kubelet.sh
+ '[' '' == stock-only ']'
+ CRI_SOCK=/etc/vhive-cri/vhive-cri.sock
+++ cat /proc/1/cpuset
++ basename /
+ CONTAINERID=/
+ '[' 64 -eq 1 ']'
+ sudo kubeadm init --ignore-preflight-errors=all --cri-socket /etc/vhive-cri/vhive-cri.sock --pod-network-cidr=192.168.0.0/16
I0523 13:23:36.520089  122821 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
+ mkdir -p /home/aditya/.kube
+ sudo cp -i /etc/kubernetes/admin.conf /home/aditya/.kube/config
++ id -u
++ id -g
+ sudo chown 1001:27 /home/aditya/.kube/config
+ '[' 1001 -eq 0 ']'
+ kubectl taint nodes --all node-role.kubernetes.io/master-
+ /home/aditya/vhive/scripts/cluster/setup_master_node.sh
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
Warning: resource configmaps/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   101  100   101    0     0    351      0 --:--:-- --:--:-- --:--:--   351
100  4926  100  4926    0     0   6851      0 --:--:-- --:--:-- --:--:-- 16474
! values.global.jwtPolicy is deprecated; use Values.global.jwtPolicy=third-party-jwt. See http://istio.io/latest/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for more information instead

- Processing resources for Istio core.
✔ Istio core installed
- Processing resources for Istiod.
- Processing resources for Istiod. Waiting for Deployment/istio-system/istiod
✔ Istiod installed
- Processing resources for Ingress gateways.
- Processing resources for Ingress gateways. Waiting for Deployment/istio-system/cluster-local-gateway, Deployment/istio-system/istio-ingressgateway
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
  Deployment/istio-system/cluster-local-gateway (container failed to start: CrashLoopBackOff: back-off 2m40s restarting failed container=istio-proxy pod=cluster-local-gateway-74c4558686-r6h8b_istio-system(ee188c65-566e-470e-a91b-e672bc193fb3))
  Deployment/istio-system/istio-ingressgateway (container failed to start: CrashLoopBackOff: back-off 2m40s restarting failed container=istio-proxy pod=istio-ingressgateway-f5b59cc7c-h9qcr_istio-system(868240ef-8d43-464e-93a9-d73d9d458c3a))
- Pruning removed resources
Error: failed to install manifests: errors occurred during operation
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
ustiugov commented 2 years ago

I suggest cleaning up the node again with ./scripts/github_runner/clean_cri_runner.sh and deleting the logs. Then try running the setup script again and see whether Istio gets installed. If not, this is an issue with the node setup that is not vHive-specific and should probably be reported to the Knative and/or Istio maintainers.
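For reference, the retry flow looks roughly like this. This is only a sketch: it assumes the single-node setup is recreated with scripts/cluster/create_one_node_cluster.sh (the entry point visible in the stderr logs above) and that logs live under /tmp/vhive-logs, as in this run.

# 1. Clean up the previous (partial) cluster and CRI state
./scripts/github_runner/clean_cri_runner.sh

# 2. Remove the stale logs so the next attempt starts fresh
sudo rm -rf /tmp/vhive-logs && mkdir -p /tmp/vhive-logs

# 3. Re-run the single-node setup and watch whether the Istio gateways come up this time
./scripts/cluster/create_one_node_cluster.sh
kubectl get pods -n istio-system -w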

Please crop the logs in the future and attach the full logs as files if necessary; otherwise the issue thread becomes unmanageably long.

aditya2803 commented 2 years ago

Hi @ustiugov, apologies for the full log files; I'll keep that in mind from now on.

After running the clean* script and redeploying the cluster, Istio suddenly deployed perfectly :) I did try this a few hours earlier, but somehow it did not work that time. Anyway, all pods are running fine now.

Functions are also being deployed and invoked normally now, and I am getting the final output in the rps1.00_lat.csv file.
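In case it helps others, the rough checks I used to confirm the redeployed cluster is healthy were along these lines (a sketch; it assumes the Knative kn CLI is installed, and the CSV file name depends on the requested RPS):

# All system pods (kube-system, istio-system, knative-serving, knative-eventing) should be Running
kubectl get pods --all-namespaces

# Deployed functions should show up as Ready Knative services
kn service list

# The per-RPS latency results, e.g. for RPS 1.00
head rps1.00_lat.csv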

Thanks a lot for your support. Also, shall I submit a PR for the changes to the setup guide, including the KVM check, etc.?

ustiugov commented 2 years ago

@aditya2803 Glad to hear it! We always welcome improvements from the community 👍 Please close the issue if it's resolved.