Netflix / eureka

AWS Service registry for resilient mid-tier load balancing and failover.
Apache License 2.0

Instance status keep STARTING in eureka server even it is UP on Local #1174

Open addozhang opened 5 years ago

addozhang commented 5 years ago

First of all, this is very difficult to reproduce. The case is rare but fatal: we have hit it 3 times across 60+ services and 150+ instances in the past two months.

eureka-client: 1.7.2 spring-boot: 1.5.12.RELEASE spring-cloud: Edgware.SR3

What we get

Instance info on remote


Status info on local

heap dump


Instance info in Heartbeat request

tcp dump


| Scope | Status | lastDirtyTimestamp | lastUpdatedTimestamp | registrationTimestamp |
|---|---|---|---|---|
| Request | UP | 1545039481813 | | |
| Local | UP | 1545039481813 | 1545039472888 | |
| Remote | STARTING | 1545039481813 | 1545039481899 | 1545039481898 |

Source Code Analysis

InstanceInfo initialization

The instance is initialized via InstanceInfoFactory#create(), and its initial status is STARTING.

Instance registration

The real registration logic is in DiscoveryClient#register(), but this method has two callers.

I. Main flow registration

EurekaAutoServiceRegistration implements the SmartLifecycle interface; its instance is created in EurekaClientAutoConfiguration#eurekaAutoServiceRegistration().

EurekaAutoServiceRegistration#start() is invoked after all Spring beans are initialized, and delegates to EurekaServiceRegistry to register the EurekaRegistration:

public void start() {
    ...
    if (!this.running.get() && this.registration.getNonSecurePort() > 0) {
        this.serviceRegistry.register(this.registration);
        this.context.publishEvent(
                new InstanceRegisteredEvent<>(this, this.registration.getInstanceConfig()));
        this.running.set(true);
    }
}

EurekaServiceRegistry#register(): it sets the instance status to the initial status UP, which can be configured via eureka.instance.initial-status. During the status update, the new status is recorded and the change time is marked down in lastDirtyTimestamp. Once the status changes, StatusChangeListener#notify() is triggered:


public void register(EurekaRegistration reg) {
    maybeInitializeClient(reg);
    if (log.isInfoEnabled()) {
        log.info("Registering application " + reg.getInstanceConfig().getAppname()
                + " with eureka with status "
                + reg.getInstanceConfig().getInitialStatus());
    }
    //1
    reg.getApplicationInfoManager()
            .setInstanceStatus(reg.getInstanceConfig().getInitialStatus());
    if (reg.getHealthCheckHandler() != null) {
       //2
        reg.getEurekaClient().registerHealthCheck(reg.getHealthCheckHandler());
    }
}

DiscoveryClient has an anonymous inner class implementing StatusChangeListener, which invokes InstanceInfoReplicator#onDemandUpdate():

statusChangeListener = new ApplicationInfoManager.StatusChangeListener() {
    @Override
    public String getId() {
        return "statusChangeListener";
    }
    @Override
    public void notify(StatusChangeEvent statusChangeEvent) {
        if (InstanceStatus.DOWN == statusChangeEvent.getStatus() ||
                InstanceStatus.DOWN == statusChangeEvent.getPreviousStatus()) {
            // log at warn level if DOWN was involved
            logger.warn("Saw local status change event {}", statusChangeEvent);
        } else {
            logger.info("Saw local status change event {}", statusChangeEvent);
        }
        instanceInfoReplicator.onDemandUpdate(); 
    }
};

InstanceInfoReplicator is initialized during DiscoveryClient#initScheduledTasks() and is an implementation of Runnable. After initialization, it is started by a ScheduledThreadPoolExecutor. Likewise, #onDemandUpdate() schedules a new run on the ScheduledThreadPoolExecutor.

In the #run() method, it invokes DiscoveryClient#refreshInstanceInfo() to update the instance status via the HealthCheckHandler. Then DiscoveryClient#register() is invoked.
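As a rough sketch (hypothetical class and field names, not the actual Eureka source), the replicator pattern described above looks like this: a single Runnable driven both by a periodic schedule and by on-demand triggers from the status-change listener.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the InstanceInfoReplicator pattern described above.
final class ReplicatorSketch implements Runnable {
    final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // don't keep the JVM alive for the periodic task
                return t;
            });
    final AtomicInteger registerCalls = new AtomicInteger();

    @Override
    public void run() {
        // Real flow: DiscoveryClient#refreshInstanceInfo() consults the
        // HealthCheckHandler, and if the info is dirty, register() is sent.
        registerCalls.incrementAndGet();
        scheduler.schedule(this, 30, TimeUnit.SECONDS); // re-arm the periodic run
    }

    // StatusChangeListener#notify() ends up here: run as soon as possible.
    void onDemandUpdate() {
        scheduler.submit(this);
    }
}
```

Note that both the periodic run and onDemandUpdate() funnel into the same run() method, which is why a status change right after startup triggers an immediate registration attempt.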

II. Registration in Heartbeat thread

DiscoveryClient#initScheduledTasks() was mentioned above. Besides InstanceInfoReplicator, it starts other threads, such as HeartbeatThread. The HeartbeatThread periodically reports the instance status and its lastDirtyTimestamp to the remote via a PUT HTTP request.

The request has two possible response statuses: 404 and 200. The former indicates there is no such instance registered on the remote; the latter means the status was reported successfully.

If a 404 is received, the thread invokes DiscoveryClient#register() to register the instance.
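That renew-or-register logic can be sketched as follows (a simplified model with hypothetical stub names standing in for the real Jersey HTTP call and DiscoveryClient#register(), not the actual source):

```java
import java.util.function.IntSupplier;

// Simplified sketch of the heartbeat logic described above: a PUT-based
// renew that falls back to a full register() when the server answers 404.
final class HeartbeatSketch {
    private final IntSupplier sendHeartbeatPut; // stands in for the real HTTP PUT
    private final Runnable register;            // stands in for DiscoveryClient#register()

    HeartbeatSketch(IntSupplier sendHeartbeatPut, Runnable register) {
        this.sendHeartbeatPut = sendHeartbeatPut;
        this.register = register;
    }

    boolean renew() {
        int status = sendHeartbeatPut.getAsInt();
        if (status == 404) {
            // The server does not know this instance yet: re-register it.
            register.run();
            return true;
        }
        return status == 200; // 200: the renewal (and status report) succeeded
    }
}
```

It is precisely this 404 fallback path that produces the second, competing registration request during startup.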

Inference

Since the status on the remote is STARTING, we can be sure the instance was registered by the heartbeat thread.

There are several relevant time points:

On the remote, InstanceResource#renewLease() -> InstanceResource#validateDirtyTimestamp(): if the lastDirtyTimestamp in the request is the same as the one stored in the registry, the server WILL NOT update the status, even if the status in the request is UP and the stored one is STARTING.
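The effect of that check can be distilled into a small sketch (a hypothetical method, not the real InstanceResource code): the request's status is only taken when its lastDirtyTimestamp is strictly newer than the stored one.

```java
// Hypothetical distillation of the validateDirtyTimestamp behavior
// described above: equal timestamps mean the stored status wins.
final class DirtyTimestampCheck {
    static String resolvedStatus(long requestDirtyTs, String requestStatus,
                                 long storedDirtyTs, String storedStatus) {
        if (requestDirtyTs > storedDirtyTs) {
            return requestStatus; // request is newer: accept its status
        }
        return storedStatus;      // equal or older: keep the stored status
    }
}
```

With the timestamps from the table above (both 1545039481813), the heartbeat's UP is ignored and the registry keeps STARTING indefinitely.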

Why this happens

Both DiscoveryClient and EurekaAutoServiceRegistration are initialized in EurekaClientAutoConfiguration. The latter depends on the former.

The HeartbeatThread is started with a delay; eureka.instance.lease-renewal-interval-in-seconds configures the delay time. The default is 30s; in our case it is configured as 10s.

This means there is a race condition between the main flow and the heartbeat thread. During registration in the heartbeat thread, the instance info is serialized to JSON: the status is read first, then lastDirtyTimestamp. In between, the main flow changes the status in EurekaServiceRegistry#register().

Further, EurekaAutoServiceRegistration#start() is invoked only after all beans are initialized; if that initialization takes close to the 10s heartbeat delay, the two flows collide.
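The interleaving can be reproduced deterministically in a toy model (hypothetical names throughout; nothing here is the actual Eureka source):

```java
// Deterministic toy model of the race described above. The heartbeat
// thread's serializer reads the status *before* the main flow sets UP,
// but reads lastDirtyTimestamp *after* it, so the registration payload
// says STARTING while carrying the post-UP dirty timestamp.
final class SerializationRaceSketch {
    static String status = "STARTING";
    static long lastDirtyTimestamp = 1000L;

    static void mainFlowSetUp(long now) {  // the EurekaServiceRegistry#register() path
        status = "UP";
        lastDirtyTimestamp = now;          // setStatus() -> setIsDirty()
    }

    static String serializeWithInterleaving() {
        String s = status;                 // serializer writes status first...
        mainFlowSetUp(2000L);              // ...main flow flips to UP in between...
        long ts = lastDirtyTimestamp;      // ...then the timestamp is written
        return s + "@" + ts;               // payload: STARTING with the UP timestamp
    }
}
```

The resulting STARTING payload carries the same lastDirtyTimestamp as every later heartbeat, so the server's dirty-timestamp check never lets UP through.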

Solution

In EurekaJacksonCodec$InstanceInfoSerializer#serialize(), move the #autoMarshalEligible() invocation to after the jgen.writeStartObject() line. The gain is that lastDirtyTimestamp is read earlier than status. With this, the status in the first registration request is guaranteed to match the real one, even if the recorded lastDirtyTimestamp is earlier than the real one; the timestamp can then be corrected by a later heartbeat request.


troshko111 commented 5 years ago

Thanks for putting in the effort to describe the issue. Correct me if I'm wrong, but this does not seem to have anything to do with Eureka; rather it looks like a misconfiguration of your Spring registration / integration code. I don't believe you should have two registration flows competing in the first place.

Also, the proposed fix does not actually fix the race condition but only reduces its probability?

I may have misunderstood this as it's a pretty involved one, in this case please reopen.

addozhang commented 5 years ago

@troshko111 sorry for late reply. I created a btrace script to reproduce it.

In my case, I set the renew interval to 5s.

There really are two registration flows during application startup:

  1. InstanceInfoReplicator: this registers the instance when triggered by a status change (via ApplicationInfoManager.StatusChangeListener in DiscoveryClient).
  2. DiscoveryClient$HeartbeatThread: this thread starts with a delay of renewInterval after DiscoveryClient is initiated. It sends heartbeat requests to the Eureka server; if the response code is 404, it performs a registration instead.

Both flows above invoke AbstractJerseyEurekaHttpClient#register() and use EurekaJacksonCodec$InstanceInfoSerializer to serialize the instance object. During serialization, the status is recorded first, then lastDirtyTimestamp.

In a particularly extreme case, HeartbeatThread records the status as STARTING but records lastDirtyTimestamp only after the status has changed to UP. Meanwhile, the InstanceInfoReplicator thread records the status as UP with the same lastDirtyTimestamp, and its request reaches the Eureka server first.

The registration request from HeartbeatThread then overwrites the status with STARTING. In subsequent heartbeat requests, even though the status is UP, the lastDirtyTimestamp values are the same, so the Eureka server won't change the instance status to UP, due to the logic in InstanceResource#validateDirtyTimestamp.

The tcp dump below shows the flow described above.

  1. HeartbeatThread sends a heartbeat request but receives 404 (the instance is not registered yet), then prepares to send a registration request.
  2. InstanceInfoReplicator sends its registration request. (status: UP, lastDirtyTimestamp: a)
  3. HeartbeatThread sends its registration request. (status: STARTING, lastDirtyTimestamp: a)


addozhang commented 5 years ago

@troshko111 could we reopen this issue to discuss?

tjuchniewicz commented 5 years ago

I've just run into the same issue... But this is probably a Spring Cloud issue, not a Eureka one.

addozhang commented 5 years ago

@tjuchniewicz no, it's Eureka. Check my demonstration above. We are currently using a workaround, but it would be better if Netflix fixed it.

tjuchniewicz commented 5 years ago

I agree with @addozhang. Please reopen

troshko111 commented 5 years ago

I understand the flow and agree this is 100% a bug (a race condition); the proposed fix does not eliminate it but only improves the odds. I'm going to merge it as it's harmless and should reduce the problem in the meantime. Thanks for putting in the effort @addozhang, appreciate it.

If someone wants to tackle the generic race condition here, feel free to discuss the proposed changes in this issue, I'm going to keep it open so that we track this startup bug.

abracadv8 commented 5 years ago

It seems like AbstractJerseyEurekaHttpClient#register() needs an atomic way of reading both the status and lastDirtyTimestamp.

Furthermore, it appears that status updates already set both the status and lastDirtyTimestamp atomically via the synchronized InstanceStatus setStatus(InstanceStatus status), which calls setIsDirty().

I assume that the register() method happens only once on startup and isn't invoked repeatedly?

If that's the case, is it possible to just synchronize on the object and use the InstanceInfo copy-constructor, or create a totally new clone() method, to make a copy before it's sent to the builder?

That way the status and timestamp can't be out of sync during the building step.

Something like:

EurekaHttpResponse<Void> register(InstanceInfo info) {
  InstanceInfo infoClone;
  // Snapshot status + lastDirtyTimestamp atomically before serialization.
  synchronized (info) {
    infoClone = info.clone(); // copy-constructor or a new clone() method
  }
  String urlPath = "apps/" + infoClone.getAppName();
  Builder resourceBuilder = jerseyClient.resource(serviceUrl).path(urlPath).getRequestBuilder();
  addExtraHeaders(resourceBuilder);
  ClientResponse response = resourceBuilder
      .header("Accept-Encoding", "gzip")
      .type(MediaType.APPLICATION_JSON_TYPE)
      .accept(MediaType.APPLICATION_JSON)
      .post(ClientResponse.class, infoClone);
  return anEurekaHttpResponse(response.getStatus()).headers(headersOf(response)).build();
}

edit 10/22

@troshko111 - Any thoughts on the above approach?

kong62 commented 4 years ago

We have the same issue.

rene-m-hernandez commented 4 years ago

I do not feel that registering a health check should force an onDemandUpdate, and I suspect that this contributes to the race condition: https://github.com/Netflix/eureka/blob/5c469f9098b7919865c63515d8be4b3d9f5b575f/eureka-client/src/main/java/com/netflix/discovery/DiscoveryClient.java#L686-L697

At the very least, if registering a health check is to trigger an onDemandUpdate, it should respect this property so that the behavior can be disabled: https://github.com/Netflix/eureka/blob/5c469f9098b7919865c63515d8be4b3d9f5b575f/eureka-client/src/main/java/com/netflix/discovery/EurekaClientConfig.java#L526-L533 Setting that property to false disables the StatusChangeListener, which should technically resolve the problem as outlined in @addozhang's analysis. But if one registers a health check, the same race condition exists.

Otherwise, as I understand it, one should be able to set these properties: https://github.com/Netflix/eureka/blob/5c469f9098b7919865c63515d8be4b3d9f5b575f/eureka-client/src/main/java/com/netflix/discovery/EurekaClientConfig.java#L66-L78

to such a value that they do not cause a race condition with this value: https://github.com/Netflix/eureka/blob/5c469f9098b7919865c63515d8be4b3d9f5b575f/eureka-client/src/main/java/com/netflix/appinfo/LeaseInfo.java#L228-L235