Closed pichumanichellappa closed 1 year ago
Hello @pichumanichellappa ,
We won't be able to include a solution for your issue in Kanto M4 release, which is planned for end of August 2023/early September 2023, since we are already in hardening phase with the M4 release and we cannot move the M4 release date further.
How critical is this issue for you? Can you wait for its resolution until the next Kanto M5 release, planned for the end of November 2023?
@ttttodorov @k-gostev
In that case, could you give us a temporary workaround that would help us proceed with the implementation? For example, enabling additional ports, or some other possible way to get around the issue?
Hello @pichumanichellappa, I have tried to reproduce the problem, but without success. Do I need only to run _iptables_basicconfiguration.sh and have running containers, or is there more to the setup?
The behavior you described might be caused by a constantly restarting container; the default restart policy is unless-stopped. So if a container process exits on its own (possibly caused by the new host iptables configuration; otherwise it runs fine), it will be restarted automatically by the container management, and you will get traces as a new virtual interface is set up on every container start. You may try setting the restart policy of the running containers to no and check whether the traces stop and the containers remain in the exited state.
kanto-cm update <ctr-id> --rp no
Hello @dimitar-dimitrow
Thanks for the pre-analysis. Yes, the steps are just running the iptables script and having running containers.
No more steps as far as I understand. I will try the kanto-cm update <ctr-id> --rp no command and report back.
@pichumanichellappa just confirmed that the containers are constantly restarting and flooding the logs. The next step is to check the container logs and find out why those container processes exit with the restricted iptables configuration in place. As the container images are in-house ones and there is no unexpected behavior on the Kanto container management side, I will change the issue label to task.
We are able to configure our system's iptables setting without affecting kanto container manager requirement. Hence closing the issue.
However, from my perspective there is still a bug/unexpected behavior: I found that the config below (ip_tables: permit the IP tables rules) in the container manager configuration file does not have any effect.
"ip_tables": false,
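For context, a minimal sketch of where this flag might sit in the container management configuration file. The surrounding "network" section and "type" key are assumptions about the config layout, not confirmed by this thread; check your deployment's actual config file:

```json
{
  "network": {
    "type": "bridge",
    "ip_tables": false
  }
}
```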
We are configuring our firewall/iptables on the system to have a set of security rules. One of them is allowing only SSH connections to the target by enabling only port 22 (the SSH port).
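A hypothetical minimal rule set of this kind, in iptables-save format (this is an illustrative sketch, not the attached file): it drops all inbound traffic except loopback, established connections, and SSH. Note that container networking typically also relies on FORWARD and NAT rules for the container bridge, so a default-drop FORWARD policy like this is one place a restrictive configuration can interfere with container management:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# allow loopback and already-established traffic
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# allow inbound SSH only (port 22)
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```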
But once we do that, we keep getting traces related to virtual Ethernet interface creation. To me it appears that the virtual interfaces keep getting reset internally.
I have attached the iptables rule set for reference. The rules are in the file: iptables_basic_configuration.zip