Closed: liad5h closed this issue 2 years ago
Also adding the Consul client config file:
{
  "server": false,
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "datacenter": "eu-central-1",
  "data_dir": "/var/lib/consul",
  "log_level": "INFO",
  "retry_join": ["provider=k8s kubeconfig=/var/lib/consul/.kube.config namespace=consul label_selector=\"app=consul,component=server\""],
  "verify_incoming": false,
  "verify_outgoing": false,
  "acl": {
    "tokens": {
      "agent": "<token>"
    },
    "enabled": true,
    "down_policy": "extend-cache",
    "enable_token_persistence": true
  },
  "encrypt": "<encrypt>",
  "encrypt_verify_incoming": true,
  "encrypt_verify_outgoing": true,
  "primary_datacenter": "eu-central-1"
}
Hi @liad5h, thank you so much for reaching out. It seems from what you have provided that you are doing everything right. We have seen flapping like that before when a user has not allowed access to their cluster via both TCP and UDP, but you clearly have.
I'm thinking this may have to do with an infrastructure problem where your ports may not be properly open for pod IPs.
I assume you have looked at this documentation already given the completeness of your question, but there is a docs page on running Consul clients outside of Kubernetes. There may be a detail in there that will help.
I'm sorry I can't see anything wrong with what you are sharing here.
Hey @t-eckert
What is the best way to verify whether one of my ports is closed? I used telnet for TCP and nc for UDP.
I will try to follow this guide again and see if I missed anything.
Could you point me to the specific port that would cause such an issue if blocked?
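For illustration, a quick check of the Serf LAN port (8301) from the EC2 instance could look like the sketch below; the server address is a placeholder. Gossip on 8301 runs over both TCP and UDP, and a blocked UDP path is a common cause of members flapping between alive and failed.

# TCP reachability of the Serf LAN port on a Consul server
nc -zv <consul-server-address> 8301

# UDP probe of the same port (-u). A UDP "success" only means the packet
# was sent, so also watch the agent logs for memberlist ping timeouts.
nc -zvu <consul-server-address> 8301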
I found out that only client agents running in Docker on my EC2 instances are failing, I guess because their registered address is not routable.
When I set the advertise_addr property in the Consul config file to the routable IP address of the instance, the issue is resolved.
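For reference, a minimal sketch of that change in the client config, with a placeholder for the instance's routable IP:

{
  "advertise_addr": "<routable-instance-ip>",
  "bind_addr": "0.0.0.0"
}

With Docker's default bridge network the agent otherwise advertises its container-internal address, which the server pods on EKS cannot reach; running the container with host networking is another way to make the agent advertise the instance's address.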
Question
I have a Consul server (1.9.2) running on AWS EKS with 3 pods and no clients on EKS. EKS version: v1.18.20-eks-c9f1ce. Helm chart version: 0.39.0. I am trying to connect a client running on AWS EC2 (Docker) to the server. Ports 8300-8302 (TCP) and 8301-8302 (UDP) are open on both the server and the client.
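For context, a hedged sketch of how such a client might be started on the EC2 instance with Docker's default bridge network; the image tag, config path, and published port are assumptions, not taken from this issue. Even with the gossip port published, the agent still advertises its container-internal IP unless advertise_addr is set.

# Hypothetical invocation; /etc/consul.d and the image tag are placeholders.
docker run -d --name consul-client \
  -p 8301:8301 -p 8301:8301/udp \
  -v /etc/consul.d:/consul/config \
  consul:1.9.2 agent -config-dir=/consul/config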
In consul members I see the client status is alive:
But I still see the client go up and down in the Consul UI, and I see the following messages in the logs non-stop:
I tried all of the following configurations for the Consul server; none of them worked:
CLI Commands (consul-k8s, consul-k8s-control-plane, helm)
Helm Configuration
Logs
Logs from the client:
Logs from the server:
Output of consul members on the server, several checks a few seconds apart:
Current understanding and Expected behavior
Since the ports are open and the client does sometimes connect to the servers, I expect it to stay connected.
Environment details
Additional Context