brncsk opened 2 weeks ago
Did this use to work before @nc-marco's change?
@cyberw I don't know, but if there's a specific version I should try, please let me know.
Hello Lars and Adam,
Just some background. The patch I added was meant to deal with the situation in EKS where the host itself (that is, the container) supported IPv6 while the infrastructure did not, which seems to be exactly the case you have. Before this patch, Locust would switch to IPv6 whenever it noticed the host supported IPv6. I simply added a check to prevent switching to IPv6 unless name resolution of the master node returned an IPv6 address. That made sense to me, since I couldn't imagine an infrastructure that would resolve a hostname to an IPv6 address without being able to route IPv6 traffic. This is the situation in EKS: even though the containers support IPv6, CoreDNS in the Kubernetes cluster itself does not return IPv6 addresses, only IPv4. It seems you have found an exception to this rule, and I would say what AWS is doing here, injecting IPv6 addresses into /etc/hosts when IPv6 is not supported, is rather perplexing.
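The check amounts to roughly the following (a simplified sketch of the idea only, not the actual code in Locust; the names are just for illustration):

```python
import socket

def should_use_ipv6(master_host: str, master_port: int) -> bool:
    # Only switch to IPv6 if resolving the master node's hostname actually
    # yields an IPv6 address; otherwise stay on IPv4.
    try:
        results = socket.getaddrinfo(master_host, master_port,
                                     proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False
    return any(family == socket.AF_INET6 for family, *_ in results)
```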
Having said this, I believe that before my patch it would have switched to IPv6 anyway, since the patch was added precisely to avoid that when it didn't make sense. But please go ahead and try it and let me know. It is frustrating that AWS is doing something which seems to make no sense, and that we have to find a workaround for it. I had thought about adding a flag to force IPv4 to solve our issue with EKS, but I felt my solution was sufficient and avoided adding complexity to the code and to the use of Locust.
regards,
Marco
Marco's fix was introduced in 2.32.0, so you can try the version just before it, which is 2.31.8.
Sure, thanks both of you for the heads up – I'll try 2.31.8 and report back later!
Prerequisites
Description
Locust workers cannot connect to the master if both are run in ECS tasks connected by ECS Service Connect.
Service Connect injects both the master container's v4 and v6 addresses into /etc/hosts, which I reckon tells Locust to use IPv6 as described in #2923. However, this does not work because some part of the stack does not support IPv6 (I never managed to determine which part, exactly).
I tried disabling IPv6 at the kernel level, so that sysctl net.ipv6.conf.all.disable_ipv6 does report back 1 – but it's still not working. A workaround is to remove the problematic line in a custom entrypoint script:
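Something along these lines (just a sketch, not the exact script; the environment variable and the line-matching rule are assumptions that may need adjusting for your setup):

```python
#!/usr/bin/env python3
# entrypoint.py -- hypothetical wrapper: strip the injected IPv6 entry for the
# master host from /etc/hosts, then hand control over to locust unchanged.
import os
import sys

# Assumption: the master hostname is available via Locust's usual env var.
master_host = os.environ.get("LOCUST_MASTER_NODE_HOST", "locust-master")

with open("/etc/hosts") as f:
    lines = f.readlines()

# Keep every line except those that map an IPv6 address (first field
# contains a colon) to the master host.
kept = [
    line for line in lines
    if not (master_host in line and ":" in line.split()[0])
]

with open("/etc/hosts", "w") as f:
    f.writelines(kept)

# Replace this process with locust, passing through any worker arguments.
os.execvp("locust", ["locust", *sys.argv[1:]])
```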
Command line
N/A
Locustfile contents
Python version
Python 3.11.10
Locust version
2.32.2
Operating system
Linux ip-x-x-x-x.eu-central-1.compute.internal 5.10.226-214.880.amzn2.x86_64 #1 SMP Tue Oct 8 16:18:15 UTC 2024 x86_64 GNU/Linux