jhuckaby / Cronicle

A simple, distributed task scheduler and runner with a web based UI.
http://cronicle.net

Worker server not automatically assigned in server group #483

Open · henriquegranatto opened 2 years ago

henriquegranatto commented 2 years ago

Summary

I'm configuring Cronicle to work with multiple servers. The master server is able to recognize workers, but workers are not being automatically assigned to the group.

(screenshot: the Cronicle servers list, showing a nearby detected worker)

Steps to reproduce the problem

Attachments: Config JSON, Docker Compose YAML, Dockerfile

Note: all files are attached as .txt, since GitHub does not allow uploading some of them in their original format.

Your Setup

Operating system and version?

Node.js version?

Cronicle software version?

Are you using a multi-server setup, or just a single server?

Are you using the filesystem as back-end storage, or S3/Couchbase?

jhuckaby commented 2 years ago

Alas, Cronicle doesn't support automatic adding of workers that it detects in the nearby LAN. They all have to be manually added.

That highlighted worker item you see in your list is not yet a real server in the cluster. It is merely "detected" nearby using UDP broadcast packets. It still has to be added by a user, i.e. you need to click that link on the far right that says "Add Server".

Sorry for the confusion over this. The nearby server detection code has been completely removed in Cronicle 2.0 (Orchestra) because it causes so much confusion, and doesn't work in any public clouds either.

henriquegranatto commented 2 years ago

Ah I see...thanks for the super quick response!

Being a bit annoying, but just to confirm: in version 2.0 there will also be no support (at least nothing planned for now) for clustering of self-discovered and 'self-assigned' servers, is that it?

We will still be able to work with multiple servers, but the entire process will be manual.

jhuckaby commented 2 years ago

> Being a bit annoying, but just to confirm: in version 2.0 there will also be no support (at least nothing planned for now) for clustering of self-discovered and 'self-assigned' servers, is that it?

Oh sorry, I should have elaborated here. It's actually a good thing that we're removing the old flawed UDP discovery system, because Cronicle 2.0 has a completely redesigned system that "automatically" adds worker servers when they start up. Basically, the worker itself contacts the master, they negotiate and the server is added to the cluster.

You can specify the master server hostname and secret key in a config file on the new worker, or specify them on the command-line. But either way, this allows you to automate the process, and it doesn't require anyone to click any buttons in the UI to add new servers.
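As a purely illustrative sketch (Cronicle 2.0 was unreleased at the time, and this thread does not give its actual config key names), the worker-side config file might look something like this — `secret_key` mirrors the existing shared-secret setting in Cronicle 1.x, while `primary_hostname` is a guessed name:

```json
{
  "primary_hostname": "cronicle01.internal",
  "secret_key": "<shared cluster secret>"
}
```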

I hope that makes sense.

jhuckaby commented 2 years ago

(Ugh, I need to stop saying "master". I meant primary! Sorry, old habits.)

henriquegranatto commented 2 years ago

Looks great! Now I'm looking forward to version 2.0 hehehe

Thanks for the clarifications!

jkrenge commented 2 years ago

I have a setup of auto-scaling workers, so the servers constantly change, but jobs need to run on those worker instances. I do have a single static master server.

Is there any (hacky) workaround that lets me register workers through code?

For example, calling a private API to register a worker from a CLI script? If I see it correctly:

curl --request POST 'http://<ip>:3012/api/user/login' \
  --header 'Content-Type: text/plain' \
  --data-raw '{"username":"<username_with_admin_rights>","password":"<password>"}'

And then if (response.code === 0), I can use the response.session_id in the next call:

curl --location --request POST 'http://<ip>:3012/api/app/add_server' \
  --header 'Content-Type: text/plain' \
  --data-raw '{"hostname":"<my-new-host-ip>","session_id":"<response.session_id>"}'
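Those two calls could be chained in a small shell function — the IP, credentials, and worker hostname are placeholders, and the `session_id` is scraped out of the JSON response with `sed` to avoid a `jq` dependency:

```shell
#!/bin/sh
# Sketch: log in to the Cronicle primary, then register one worker.
# Arguments: $1 = primary IP, $2 = admin username, $3 = password, $4 = worker hostname.
register_worker() {
  BASE="http://$1:3012"
  # Log in and pull session_id out of the JSON response
  SESSION_ID=$(curl -s -X POST "$BASE/api/user/login" \
    --header 'Content-Type: text/plain' \
    --data-raw "{\"username\":\"$2\",\"password\":\"$3\"}" \
    | sed -n 's/.*"session_id":"\([^"]*\)".*/\1/p')
  [ -n "$SESSION_ID" ] || { echo "login failed" >&2; return 1; }
  # Register the worker using the freshly obtained session
  curl -s -X POST "$BASE/api/app/add_server" \
    --header 'Content-Type: text/plain' \
    --data-raw "{\"hostname\":\"$4\",\"session_id\":\"$SESSION_ID\"}"
}
```

Something like `register_worker 10.0.0.5 admin secret worker-01.internal` could then be run from an instance boot script.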

This does fill up my server list though, so at some point I'd probably have to log in, iterate over all servers in the response, pick worker nodes with the longest uptime and only keep the most recent n ones, where n is the minimum size of my auto-scaling group.
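If a matching private `remove_server` endpoint exists alongside `add_server` (an assumption — it is not confirmed in this thread), the pruning could be scripted the same way; the endpoint name and payload fields below are guesses modeled on the calls above:

```shell
#!/bin/sh
# Hypothetical cleanup sketch: drop stale workers by hostname.
# SESSION_ID is assumed to come from the /api/user/login step above.

# Build the JSON body for one host: $1 = hostname, $2 = session_id
remove_payload() {
  printf '{"hostname":"%s","session_id":"%s"}' "$1" "$2"
}

# For each stale hostname passed as an argument, call the assumed endpoint.
for HOST in "$@"; do
  curl -s -X POST 'http://<ip>:3012/api/app/remove_server' \
    --header 'Content-Type: text/plain' \
    --data-raw "$(remove_payload "$HOST" "$SESSION_ID")"
done
```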

(I actually don't need all nodes in my ASG to run cronicle jobs as well, but I need to colocate the cron execution with my instances.)

~~I do use S3 as storage, so could have the current node add itself to s3://.../global/servers/0.json, but this seems error-prone and does require a restart of the server if I noticed correctly?~~ That's just a bad idea.

jhuckaby commented 2 years ago

See: https://github.com/jhuckaby/Cronicle/discussions/479