stellar / supercluster

Stellar-core integration test automation tool

Parallel catchup V2 #178

Closed: jayz22 closed this 3 months ago

jayz22 commented 4 months ago

What

Resolves https://github.com/stellar/stellar-core-internal/issues/264 by adding:

To try the new mission, run this command on your local machine (you first need access to a Kubernetes cluster, e.g. via sshuttle to the ssc):

```
dotnet run mission HistoryPubnetParallelCatchupV2 \
  --image=docker-registry.services.stellar-ops.com/dev/stellar-core:latest \
  --pubnet-parallel-catchup-num-workers=6 \
  --pubnet-parallel-catchup-starting-ledger=52500000
```

Why

The current parallel catchup mission frequently suffers from instability (e.g. see https://github.com/stellar/stellar-supercluster/issues/280 and https://github.com/stellar/supercluster/pull/150). The main reason is that we manage the pod lifecycle, the job queue, and status monitoring all inside the mission itself (outside the Kubernetes cluster), by constantly querying the Kubernetes API. There are many potential points of failure (e.g. a pod fails, a pod cannot be allocated, the K8s API times out), each of which results in a total mission failure.

By moving all of the core logic inside Kubernetes (leveraging k8s's rich resource and workload management capabilities), this design is resilient to any single point of failure. It also eliminates virtually all Kubernetes API queries: status checking becomes a single plain HTTP query against the job monitor's endpoint every 30 seconds, which removes a main source of unreliability.
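To illustrate, here is a minimal sketch of what that mission-side status check could look like, assuming the job monitor exposes a JSON status document with a count of remaining ranges; the URL, port, and field name here are assumptions for illustration, not the actual endpoint added in this PR:

```bash
# Hypothetical sketch: poll the job monitor every 30 seconds until no
# catchup ranges remain. Endpoint URL and JSON field name are assumed.
MONITOR_URL="http://parallel-catchup-job-monitor:8080/status"

while true; do
  if ! status=$(curl -fsS "$MONITOR_URL"); then
    echo "job monitor unreachable, retrying in 30s"
    sleep 30
    continue
  fi
  echo "$(date -u +%FT%TZ) job monitor says: $status"
  remaining=$(echo "$status" | python3 -c 'import json,sys; print(json.load(sys.stdin)["remaining"])')
  if [ "$remaining" -eq 0 ]; then
    echo "all catchup ranges complete"
    break
  fi
  sleep 30
done
```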

Besides, there are a few other improvements/benefits:

Check out the Design doc for more details.

jayz22 commented 4 months ago

I did a round of addressing comments and will continue with the rest tomorrow. Meanwhile I've started another production workload run to see whether the recent changes yield further timing improvements.

sisuresh commented 4 months ago

> A few nits noted in comments above, nothing critical.
>
> Looking at the finished product .. I also just remain a little hesitant about this whole thing: the configuration layering is such that you've now got at least 3 layers of nested, untyped, and not-especially-debuggable languages (the Helm templating system, then YAML, then the final bunch of Bourne shell / Python scripts and sed / awk / redis-command / whatever strings).
>
> This is kinda what we were dealing with in the pre-supercluster days (back then it was Ruby, some other templating system, and dockerfiles) and one of the motives for consolidating all the logic in supercluster's F# code: there are real types, real variables your IDE and compiler can find and complain about mismatches between, and you're not as likely to have to keep re-running the whole job just to find typos and nested-quoting mistakes.
>
> I guess if the F# and API-driven approach is sufficiently alien-seeming, relative to the templates-of-YAML-driven style of Kubernetes configuration that's standard in ops environments, then .. I won't stop you. But I guess I just want to double check with you, @jacekn, @sisuresh, @marta-lokhova and anyone else who's been hacking on supercluster for a while: are we all ok continuing on this path, having spent some time on it and looked critically at the result?
>
> (This is not a total objection, just a slight hesitation; in general I think you did great work here and it's exactly what you said you were going to build and I agreed was an experiment worth conducting! I am just wondering how you and others feel about the result. I'm a bit on the fence. If it seems it might be preferable to several folks?)

I think what Jay has here works well for jobs like parallel catchup, where we had very little client-side logic (and that logic was mainly just orchestration anyway). I think F# should still be used for all of our other missions for ease of development and debugging, but if this V2 pubnet parallel catchup improves the reliability of the existing mission, then I think the tradeoff is worth it.

jayz22 commented 4 months ago

I've pushed a few changes that should address all the outstanding comments. Besides what's already mentioned in the comments, here are a few main improvements since the last review:

There were a couple of experiments:

> A few nits noted in comments above, nothing critical.
>
> Looking at the finished product .. I also just remain a little hesitant about this whole thing: the configuration layering is such that you've now got at least 3 layers of nested, untyped, and not-especially-debuggable languages (the Helm templating system, then YAML, then the final bunch of Bourne shell / Python scripts and sed / awk / redis-command / whatever strings).
>
> This is kinda what we were dealing with in the pre-supercluster days (back then it was Ruby, some other templating system, and dockerfiles) and one of the motives for consolidating all the logic in supercluster's F# code: there are real types, real variables your IDE and compiler can find and complain about mismatches between, and you're not as likely to have to keep re-running the whole job just to find typos and nested-quoting mistakes.
>
> I guess if the F# and API-driven approach is sufficiently alien-seeming, relative to the templates-of-YAML-driven style of Kubernetes configuration that's standard in ops environments, then .. I won't stop you. But I guess I just want to double check with you, @jacekn, @sisuresh, @marta-lokhova and anyone else who's been hacking on supercluster for a while: are we all ok continuing on this path, having spent some time on it and looked critically at the result?
>
> (This is not a total objection, just a slight hesitation; in general I think you did great work here and it's exactly what you said you were going to build and I agreed was an experiment worth conducting! I am just wondering how you and others feel about the result. I'm a bit on the fence. If it seems it might be preferable to several folks?)

@graydon to address your comment: in my opinion the benefits of the new approach (I listed some of them at the top) outweigh the cons you mentioned. The parallel catchup mission is a fairly straightforward and static setup, and the Helm/YAML approach makes the structure clear and more visual. All of the container logic involving shell scripts is fairly small and manageable (including the worker logic).
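For a sense of scale, a worker's loop can stay about this small. The sketch below assumes ledger ranges are queued in a Redis list; the host, list names, and catchup invocation details are illustrative assumptions, not the actual scripts in this PR:

```bash
# Hypothetical worker loop: pop a ledger range from a Redis queue, run
# catchup for it, and record the outcome. All names here are illustrative.
REDIS_HOST="redis"

while range=$(redis-cli -h "$REDIS_HOST" LPOP ranges) && [ -n "$range" ]; do
  echo "catching up range $range"
  if stellar-core catchup "$range" --conf /config/stellar-core.cfg; then
    redis-cli -h "$REDIS_HOST" RPUSH succeeded "$range"
  else
    redis-cli -h "$REDIS_HOST" RPUSH failed "$range"
  fi
done
```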

The extra layer of configuration can appear a little intimidating at first, but part of that is because we wanted a single point of entry for all supercluster missions. Note there is also one more layer above this, the Jenkins job, so the overall pipeline is: Jenkins --> (dotnet) parallel-catchup mission --> Helm config --> pod configs. We don't strictly need the dotnet layer (it exists only to keep a single point of entry shared with the other missions): Jenkins could configure Helm and launch the project directly. So in the future, if we want, we could spin the parallel catchup mission out to run completely standalone inside Kubernetes (which wouldn't be possible for the other missions), with Jenkins configuring it via the Helm command line; that would simplify the overall flow a lot (see the sketch below).
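As a rough illustration of that standalone path, Jenkins could drive the chart directly with something like the following; the chart path and value names are assumptions for illustration, not the actual chart added in this PR:

```bash
# Hypothetical direct Helm invocation from Jenkins; chart path and value
# names are illustrative only.
helm upgrade --install parallel-catchup ./charts/parallel-catchup \
  --namespace parallel-catchup --create-namespace \
  --set stellarCoreImage=docker-registry.services.stellar-ops.com/dev/stellar-core:latest \
  --set numWorkers=6 \
  --set startingLedger=52500000

# Check the release afterwards:
helm status parallel-catchup --namespace parallel-catchup
```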

For other more intricate setups (such as experimenting with topology), F# programmability still offers tremendous benefits and likely can't be replaced.