Hi @salexer. Thanks for your patience while waiting for me to come back to you.
I haven't experienced this issue myself, but the message - "Another app is currently holding the xtables lock" - suggests precisely that you have another process running that is holding the xtables lock. iptables needs to obtain this lock in order to modify iptables (xtables) rules, and docker-ingress-routing-daemon needs to call iptables - hence the problem.
Can you first check whether you still get this error message and, if so, inspect your process table to find out which process could be holding the xtables lock? If in doubt, please share any candidate processes, or a full process list. I am not familiar with what processes do this, but I wonder if some kind of firewall rule management daemon may be responsible.
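In case it helps, something like the following should reveal a candidate lock holder (the lock file path /run/xtables.lock is an assumption and may differ on your distribution):

    # Look for processes that typically manipulate iptables/xtables rules
    ps aux | grep -E 'ip6?tables|xtables|firewalld|ufw' | grep -v grep

    # List processes holding the xtables lock file open (path is an assumption)
    sudo fuser -v /run/xtables.lock
    sudo lsof /run/xtables.lock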
N.B. I don't believe adding the -w option to iptables would help, as I suspect it would just cause iptables to block.
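For reference, -w just makes iptables wait for the lock, optionally with a timeout in seconds, rather than exiting straight away; an illustrative (read-only) invocation might be:

    # With -w, iptables waits up to 5 seconds for the xtables lock instead of exiting immediately
    iptables -w 5 -t mangle -L PREROUTING -n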
Dear struanb, I tried to follow your instructions in our Docker Swarm cluster, but we did not get the expected results.
Our cluster has 7 nodes and the ingress network is 10.255.0.0/16. The service for which we want to obtain the real client IP is Nginx, which is deployed in the cluster with multiple replicas. We scaled the Nginx service down to 0 replicas, then ran docker-ingress-routing-daemon --ingress-gateway-ips <Node Ingress IP List> --install on each node, and then scaled the Nginx service back up to its original number of replicas.
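Concretely, the sequence we ran was roughly the following (the service name nginx and the replica count 4 are placeholders for our actual values):

    # Scale the service down to zero replicas
    docker service scale nginx=0

    # On each node, install the daemon's routing rules
    docker-ingress-routing-daemon --ingress-gateway-ips <Node Ingress IP List> --install

    # Scale the service back up to its original replica count
    docker service scale nginx=4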
After these operations, all the exposed ports of the services deployed in the cluster became unreachable; for example, Portainer's port 9000 cannot be accessed.
To keep things minimal, we then chose node A (a single node) as both the load-balancing node and the node for the service replicas, and, following your instructions, ran docker-ingress-routing-daemon --ingress-gateway-ips <Node Ingress IP List> --install on node A only. This time the Nginx service's port could be reached and the log shows the real IP, but the reverse-proxy rules configured in Nginx all behave abnormally: the Nginx http log shows return code 499 for requests reverse-proxied to other ports in the cluster. At the same time, other services deployed on node A still fail when their ports are requested, while the same services are normal on nodes where docker-ingress-routing-daemon --ingress-gateway-ips <Node Ingress IP List> --install was not run. For example, requesting port 9000 of node A fails, but requesting port 9000 of other nodes reaches Portainer normally.
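As a simple illustration of the check we used (the node addresses are placeholders):

    # Fails against node A, where the daemon was installed
    curl -v http://<node-A-ip>:9000/

    # Reaches Portainer normally on a node where the daemon was not installed
    curl -v http://<node-B-ip>:9000/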
Have we misunderstood the usage you described, or is there something wrong with our procedure? We are very eager to use your daemon. Thank you!
Hi @Jasonlxl. Thanks for trying DIRD. Would you mind copying your question into a separate issue? I think this is unrelated to xtables.lock. I will consider your question in the meantime, and will reply as soon as you have created the new issue.
Alright, I've created a new issue. Thank you!
Since the original error message - "Another app is currently holding the xtables lock" - suggests that another running process is holding the xtables lock, and this doesn't appear to be an issue with DIRD, I am closing this issue.
Hi @struanb.
I'm trying to run your script, but it doesn't work.
Before 2021-04-01.21:51:24.497593 and 2021-04-01.21:57:53.416856 I changed the service scale from 0 to 1...
Ubuntu 20.04.1 LTS; Docker version 19.03.13, build 4484c46d9d
Thank you!