Closed hhjfdkl closed 2 months ago

There's an issue connecting to the Kubernetes instance of our employee microservice: we get a java.net.UnknownHostException along with errors from io.netty.resolver.dns.DnsResolveContext.finishResolve, which looks like a problem resolving the DNS name.

From what I can tell, everything is set up properly, and when making manual DNS calls to the employee service through the dnsutil pod running on our cluster, I get no errors.
When we run the service, the gateway, and Eureka locally, the issue does not occur.
I believe the issue may be caused by having our pods running on the cluster while local instances of those services are connected to our remote Eureka server at the same time. It's a stretch, but it's the last thing I can think of.
Since everything works locally, I want to push the current versions of the employee service and the gateway to Kubernetes and troubleshoot from there.
If this does not fix the issue, I want to reach out to Brian to see if he can spot anything abnormal about our Kubernetes setup.
It seems that the issue may have been twofold. The eureka.instance.hostname property was not properly assigned. When these two steps were addressed, the issue went away. It was a good few hours of panic though, I might add.
For future services, their application.yml should include the following Eureka settings:
eureka:
  client:
    service-url:
      defaultZone: http://20.242.136.134:8761/eureka # Default value for eureka
  instance:
    hostname: <service-name-here>.default.svc.cluster.local
The hostname in this example should be the in-cluster DNS name of the service that the application.yml file belongs to.
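For reference, that DNS name follows the Kubernetes pattern <service-name>.<namespace>.svc.cluster.local and comes from the Service object. Below is a minimal sketch of such a Service, assuming the employee service is exposed inside the cluster as employee-service in the default namespace on port 8080 (the names and port are illustrative, not taken from our actual manifests):

apiVersion: v1
kind: Service
metadata:
  name: employee-service      # becomes the first segment of the DNS name
  namespace: default          # becomes the second segment of the DNS name
spec:
  selector:
    app: employee-service     # must match the labels on the employee service pods
  ports:
    - port: 8080              # port other pods (e.g. the gateway) call
      targetPort: 8080        # container port of the employee service

With this in place, pods in the cluster can reach the service at employee-service.default.svc.cluster.local, which is the value eureka.instance.hostname should advertise.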
If the service needs to be exposed externally, the hostname value should instead be its external IP.
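Putting it together, a filled-in version of these settings for the employee service might look like the sketch below, assuming it registers with Eureka as employee-service in the default namespace (the service name and external IP are placeholders, not our real values):

spring:
  application:
    name: employee-service   # assumed application name used when registering with Eureka

eureka:
  client:
    service-url:
      defaultZone: http://20.242.136.134:8761/eureka
  instance:
    hostname: employee-service.default.svc.cluster.local
    # If this service were exposed externally instead, the hostname would be
    # its external IP, for example:
    # hostname: <external-ip-here>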
The issue is resolved, and communication between the services is now working as intended.