Open yb01 opened 2 years ago
Actually, thinking about it a bit more, this behavior might be by design.
The difference is that, unlike a "traditional" CNI with a flat container networking model where all pods are reachable at the host level, Mizar confines each pod within its VPC/subnet boundary, and that boundary cannot be crossed from the host.
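To make that boundary concrete, here is a rough sketch. The IPs are illustrative, and `same_subnet24` is a hypothetical helper for this example, not part of Mizar:

```shell
#!/bin/sh
# Pure-shell /24 comparison: prints "yes" when two IPv4 addresses share
# their first three octets, "no" otherwise. Illustrative only.
same_subnet24() {
  if [ "${1%.*}" = "${2%.*}" ]; then echo yes; else echo no; fi
}

same_subnet24 21.0.21.4 21.0.21.9     # two pods in the same /24 -> yes
same_subnet24 21.0.21.4 172.31.39.83  # host address, different network -> no

# Under Mizar, reachability follows that boundary:
#   ping 21.0.21.4                           # from the host: no route / times out
#   crictl exec -it <ctr-id> ping 21.0.21.4  # from a same-VPC pod: replies
```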
To verify, I deployed a container pod under the same tenant/namespace, and the VM can be accessed from that pod since they share the same VPC/subnet, as shown below:
So this is something we need to think through with Mizar, but it is probably not a blocker for the 130 release.
root@ip-172-31-39-83:~/go/src/k8s.io/arktos# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0002] connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded
WARN[0002] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0004] connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
1c2a04f28d013 bfe3a36ebd252 15 seconds ago Running coredns 1 e06658643c73f
744fe34a086d9 4d816efab7b24 41 seconds ago Running netctr 0 18f0e88558c40
3ef846d16c39a 4b2e93f0133d3 2 minutes ago Running sidecar 0 4dc385a408dc7
a41d422057b58 6dc8ef8287d38 2 minutes ago Running dnsmasq 0 4dc385a408dc7
2366d3f80fa65 ebfc28c4ed971 2 minutes ago Running mizar-daemon 0 984c2fba29c0c
ae2d1d17a79e1 6c1b05c02f906 3 minutes ago Running vms 0 ec2a5750101a5
9ed9d92a776b7 6c1b05c02f906 3 minutes ago Running virtlet 0 ec2a5750101a5
240f4f71c2e27 6c1b05c02f906 3 minutes ago Running libvirt 0 ec2a5750101a5
638614aed03f1 74613191ee383 3 minutes ago Running mizar-operator 0 fc5b957e89ea6
root@ip-172-31-39-83:~/go/src/k8s.io/arktos# crictl exec -it 744fe34a086d9 /bin/bash
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0002] connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded
root@netpod1-1:/# ping 21.0.21.4
PING 21.0.21.4 (21.0.21.4) 56(84) bytes of data.
64 bytes from 21.0.21.4: icmp_seq=1 ttl=64 time=3.76 ms
64 bytes from 21.0.21.4: icmp_seq=2 ttl=64 time=0.610 ms
64 bytes from 21.0.21.4: icmp_seq=3 ttl=64 time=0.550 ms
^C
--- 21.0.21.4 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2015ms
rtt min/avg/max/mdev = 0.550/1.641/3.763/1.500 ms
root@netpod1-1:/# ssh cirros@21.0.21.4
The authenticity of host '21.0.21.4 (21.0.21.4)' can't be established.
ECDSA key fingerprint is SHA256:sya8/VYwhvSG9TqglyTbHcve5Wo40qWz2OLgcmVoTBY.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '21.0.21.4' (ECDSA) to the list of known hosts.
cirros@21.0.21.4's password:
Permission denied, please try again.
cirros@21.0.21.4's password:
$
Not a 1/30 release blocker.
@yb01 Can you please retest? I believe with Phu's latest change that creates a virtual interface and static route for the system-default and user VPCs, you should be able to access the VM as long as there's no IP collision.
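If it helps the retest, a host-side static route toward the VM's subnet might look roughly like this. The interface name `mizar0` and the /24 prefix are my assumptions for illustration, not details taken from Phu's actual change:

```shell
#!/bin/sh
# Helper: derive an assumed /24 prefix from a VM IP (illustrative only).
vpc_subnet() { echo "${1%.*}.0/24"; }

vpc_subnet 21.0.21.4   # prints 21.0.21.0/24

# On the host (requires root), something along these lines:
#   ip link add mizar0 type dummy && ip link set mizar0 up
#   ip route add "$(vpc_subnet 21.0.21.4)" dev mizar0
```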
Note: In the bridge CNI case you don't have the VPC isolation that Mizar provides, so it works because you have a flat network. JMHO.
Assigned to @yb01 to retest with the latest POC code. Try accessing the VM pod from the API master VM.
Tried the latest Mizar build on one box, with master and worker on the same node. Still not able to SSH to the VM, though the VM pod is expected to be accessible from the master.
Punting to post-130 release. For now one has to use option 3 until we have services functioning and verified with option 2.
What happened:
In the case below, this is the veth for the VM pod:
veth-a1bfdb1a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
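A few standard iproute2/tcpdump checks on that interface could confirm whether host traffic ever reaches the veth. The interface name is taken from the output above; the commands need root on the node, so they are shown as comments to keep the sketch runnable anywhere:

```shell
#!/bin/sh
IF=veth-a1bfdb1a   # the VM pod's veth reported above
echo "inspecting $IF"

# Run as root on the node:
#   ip -d link show "$IF"        # state, MTU 9000, peer ifindex
#   ip route get 21.0.21.4       # does the host have any route to the VM IP?
#   tcpdump -ni "$IF" icmp -c 5  # do host pings ever arrive on the veth?
```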
What you expected to happen:
As with the bridge CNI, the IP should be accessible.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?: Here is the qemu log:
Environment:
Mizar version:
Cloud provider or hardware configuration:
OS (e.g. cat /etc/os-release): Ubuntu 18.04 with kernel update
Kernel (e.g. uname -a):
Install tools:
Network plugin and version (if this is a network-related bug):
Others: