Closed dominar250 closed 2 weeks ago
Thanks for opening your first issue here! Be sure to follow the issue template!
@dominar250 I cleaned up the description a bit; please let me know if I messed it up. I do not understand your full scenario yet. You are pinging the console proxy, but that is not reflected in your description.
Can you expand a bit on the scenario and the problem? Is this initial packet drop consistently reproducible, and is it true for the SSVM as well?
@DaanHoogland Thanks for addressing this issue. From the SSVM/console proxy I can ping my KVM machine and the management server, and that is stable. But I can't get a constant ping from the KVM host or the management server to the public/private IP addresses of the SSVM and console proxy VM. The ping toggles, either initially or after a couple of pings.
@dominar250 it might be because the public/private IPs of the system VMs are in the same range. Can you share the output of `ip a` from the system VMs?
@DaanHoogland

```
root@s-55-VM:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 0e:00:a9:fe:88:07 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    altname ens3
    inet 169.254.136.7/16 brd 169.254.255.255 scope global eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 1e:00:f7:00:00:02 brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    altname ens4
    inet 10.158.65.24/25 brd 10.158.65.127 scope global eth1
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 1e:00:64:00:00:15 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    altname ens5
    inet 10.158.65.13/25 brd 10.158.65.127 scope global eth2
       valid_lft forever preferred_lft forever
```
@dominar250 I think it is because your system VMs have two NICs/IPs in the same range. Typically the public and private networks should use different IP ranges and VLANs/VNIs.
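As a side note for readers, the overlap can be confirmed directly from the `ip a` output above: eth1 (private) and eth2 (public) carry addresses in the same /25. A minimal Python sketch (values copied from the posted output):

```python
# Check whether the SSVM's private (eth1) and public (eth2) addresses
# fall into the same subnet, using the addresses from the `ip a` output.
import ipaddress

private = ipaddress.ip_interface("10.158.65.24/25")  # eth1
public = ipaddress.ip_interface("10.158.65.13/25")   # eth2

print(private.network)                    # 10.158.65.0/25
print(public.network)                     # 10.158.65.0/25
print(private.network == public.network)  # True -> same range, as suspected
```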
@weizhouapache So CloudStack can't work with a single IP range?
There may be some issues with the VM console and the SSVM (downloading templates/volumes, etc.). Everything else should work. Using a single IP range is not recommended: if you are just playing with CloudStack, it is OK, but if you want to run a production environment, you need a better design.
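For anyone hitting the same toggling-ping symptom: a common cause of intermittent ARP/ping behaviour when a Linux box has two NICs in one subnet is "ARP flux", where either NIC may answer ARP requests for both IPs. The following is a hedged workaround sketch using generic Linux sysctls; it is not confirmed as the fix for this particular issue and should be tested before applying it to system VMs:

```shell
# ARP flux workaround (generic Linux tuning, not CloudStack-specific):
# make each interface answer ARP only for addresses configured on it,
# and pick the most appropriate local source IP for outgoing ARP.
sysctl -w net.ipv4.conf.all.arp_ignore=1    # reply only if the target IP is on the receiving interface
sysctl -w net.ipv4.conf.all.arp_announce=2  # always use the best local address for ARP requests
```

The proper fix, as noted above, is still to give the public and private networks separate IP ranges.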
@weizhouapache Thanks for the suggestion. Is it a good idea to run the management server, KVM, and NFS on a single machine? Does this setup cause routing-related issues? My plan is to use CloudStack only for Kubernetes cluster development.
Sounds good @dominar250, but a separate IP range for the host and for the system VMs (and VMs in general) is still best.
ISSUE TYPE
COMPONENT NAME
CLOUDSTACK VERSION
CONFIGURATION
OS / ENVIRONMENT
SUMMARY
I'm using a nested environment over VMware. Promiscuous mode, MAC address changes, and forged transmits are enabled at the port group level. The SSVM and console proxy agents are up. I'm having an issue with packet loss from the KVM machine to the SSVM/console proxy VM on both the private and public IP addresses. Also, the check.sh script works only occasionally.
STEPS TO REPRODUCE
EXPECTED RESULTS
Ping should work fine on both the private and public IPs.
ACTUAL RESULTS