Mizaro opened this issue 1 year ago
I am talking about this part of the code: https://github.com/tsaikd/gogstash/blob/55de687242f94f1e50575a59fd0ca07fe52e7088/cmd/worker_unix.go#L52
I agree that the inputs should run in a separate goroutine, mostly because I imagine an input should wait for incoming events and send them to the filtering channel for handling.
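Something like this minimal sketch (the names here are illustrative, not gogstash's actual types):

```go
package main

import (
	"context"
	"fmt"
)

// Event stands in for gogstash's event type; the real project defines its own.
type Event struct{ Message string }

// runInput sketches an input running in its own goroutine: it blocks on an
// incoming source and forwards every event to the filter stage over a channel.
func runInput(ctx context.Context, src <-chan string, filterCh chan<- Event) {
	for {
		select {
		case <-ctx.Done():
			return
		case msg := <-src:
			filterCh <- Event{Message: msg}
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	src := make(chan string, 1)
	filterCh := make(chan Event, 16)

	go runInput(ctx, src, filterCh) // the input lives in its own goroutine

	src <- "hello"
	fmt.Println(<-filterCh) // the filter stage would consume from filterCh
}
```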
Do you mean (1 process 1 goroutine 1 event queue) * N CPUs?
I mean 1 process with N goroutines, each with its own event queue.
Take a look at https://medium.com/smsjunk/handling-1-million-requests-per-minute-with-golang-f70ac505fcaa for inspiration.
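Roughly the pattern I have in mind, loosely following that article (a minimal sketch with made-up names, not gogstash code):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// Event stands in for gogstash's event type.
type Event struct{ Message string }

func main() {
	numWorkers := runtime.NumCPU()

	// One event queue per worker goroutine.
	queues := make([]chan Event, numWorkers)
	var wg sync.WaitGroup
	for i := range queues {
		queues[i] = make(chan Event, 100)
		wg.Add(1)
		go func(id int, q <-chan Event) { // one worker draining its own queue
			defer wg.Done()
			for ev := range q {
				fmt.Printf("worker %d handled %q\n", id, ev.Message)
			}
		}(i, queues[i])
	}

	// A dispatcher would pick a queue per event; round-robin here for the sketch.
	for i := 0; i < 10; i++ {
		queues[i%numWorkers] <- Event{Message: fmt.Sprintf("event-%d", i)}
	}

	for _, q := range queues {
		close(q)
	}
	wg.Wait()
}
```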
How do we ensure the order of messages (FIFO) between different event queues?
I don't use gogstash with workers. I run everything in the same process. My heavy lifting involves around 150k messages per minute. This has worked for over a year without issues. This way GOMAXPROCS defaults to the number of CPUs, and when the load can be spread across processors, it will be. (My process above uses 4 CPUs, utilizing at most 30%.) At some point you should scale out. (But for me that has not been an issue yet.)
The code from Medium involves handling long-running tasks (uploading to an S3 bucket), and in that case using workers would be a good way to go.
In gogstash this would only be an issue when a filter or an output has to spend some time on its work (like talking to remote systems, etc.).
If you look into the elastic output you will see that it creates batches of events that it stores in memory and sends out asynchronously to the server in order to keep processing going. (I use this output a lot.)
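The idea, very roughly (a sketch of the batching pattern, not the actual elastic output code; sizes and intervals are made up):

```go
package main

import (
	"fmt"
	"time"
)

type Event struct{ Message string }

// batchAndSend collects events into in-memory batches and flushes each batch
// asynchronously, so the main processing loop is never blocked on the server.
func batchAndSend(in <-chan Event, batchSize int, flushEvery time.Duration) {
	batch := make([]Event, 0, batchSize)
	ticker := time.NewTicker(flushEvery)
	defer ticker.Stop()

	flush := func() {
		if len(batch) == 0 {
			return
		}
		toSend := batch
		batch = make([]Event, 0, batchSize)
		go func() { // send without blocking further batching
			fmt.Printf("sending %d events to the server\n", len(toSend))
		}()
	}

	for {
		select {
		case ev, ok := <-in:
			if !ok {
				flush()
				return
			}
			batch = append(batch, ev)
			if len(batch) >= batchSize {
				flush()
			}
		case <-ticker.C:
			flush()
		}
	}
}

func main() {
	in := make(chan Event)
	go func() {
		for i := 0; i < 25; i++ {
			in <- Event{Message: fmt.Sprintf("event-%d", i)}
		}
		close(in)
	}()
	batchAndSend(in, 10, 200*time.Millisecond)
	time.Sleep(100 * time.Millisecond) // let the async sends print
}
```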
As for @tsaikd's worries about the order of events, that is not an issue for me. As I am not using workers, events are sent in order. If I ever need to scale out, the events will be out of order, but that is handled at the destination.
As for setting workers == numCPU, as you asked about: isn't it good to have this control yourself if you ever choose to move away from the standard everything-in-one-process setup?
> How do we ensure the order of messages (FIFO) between different event queues?
As far as I know, Logstash does not promise to handle events in a FIFO order. If it is a requirement here, we should consider it.
> I don't use gogstash with workers. I run everything in the same process. My heavy lifting involves around 150k messages per minute. This has worked for over a year without issues. This way GOMAXPROCS defaults to the number of CPUs, and when the load can be spread across processors, it will be. (My process above uses 4 CPUs, utilizing at most 30%.) At some point you should scale out. (But for me that has not been an issue yet.)
Interesting. So there is no need for the workers feature at all, because Go's runtime can utilize all of the CPUs when GOMAXPROCS is at its default value (NumCPU) (ref: https://pkg.go.dev/runtime#GOMAXPROCS).
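For reference, a quick check (a minimal sketch) shows the default:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOMAXPROCS(0) just reports the current setting without changing it.
	// With no explicit setting it typically defaults to runtime.NumCPU(), so a
	// single process can already schedule goroutines across every CPU.
	fmt.Println("NumCPU:    ", runtime.NumCPU())
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```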
> The code from Medium involves handling long-running tasks (uploading to an S3 bucket), and in that case using workers would be a good way to go.
> In gogstash this would only be an issue when a filter or an output has to spend some time on its work (like talking to remote systems, etc.).
Actually, as far as I understand it, talking with remote systems is considered I/O as well, just like uploading to an S3 bucket.
> If you look into the elastic output you will see that it creates batches of events that it stores in memory and sends out asynchronously to the server in order to keep processing going. (I use this output a lot.)
> As for @tsaikd's worries about the order of events, that is not an issue for me. As I am not using workers, events are sent in order. If I ever need to scale out, the events will be out of order, but that is handled at the destination.
> As for setting workers == numCPU, as you asked about: isn't it good to have this control yourself if you ever choose to move away from the standard everything-in-one-process setup?
As a matter of fact, I am asking because I want to run a ReplicaSet Deployment of this service in my Kubernetes cluster. With Kubernetes, it is not recommended to have more than a single process per container, so that you can scale by adding more containers (pods with a single container, in my example here).
> Do you mean (1 process 1 goroutine 1 event queue) * N CPUs?
> I mean 1 process with N goroutines, each with its own event queue.
1 process that contains (1 goroutine 1 event queue) * N CPUs.
By the way, I tried the code without workers and it used all 8 of my CPUs. As far as I know, this means that creating workers does not add any performance benefit to the program.
With Kubernetes you should use the default one-process model so that the pod can be respawned in case of a crash.
I think the only reason to use the worker-model is if you cannot easily scale out and your filters/outputs require time to complete.
In either case your inputs should support load-sharing. Also note that some inputs do not work well with workers (like http, as it binds to a port). When I started with gogstash I tried using the http input, but quickly ran out of sockets/resources. I moved to the nsq input, which scales very well and also works when you scale out (workers or a ReplicaSet).
If gogstash has everything else you need, you have something that works well without eating memory and CPU. (I started with Logstash but it required too many resources for my load.)
> With Kubernetes you should use the default one-process model so that the pod can be respawned in case of a crash.
> I think the only reason to use the worker-model is if you cannot easily scale out and your filters/outputs require time to complete.
What I am trying to say here is that, because Go's runtime already uses GOMAXPROCS, the worker model is redundant.
Some discussion on https://github.com/tsaikd/gogstash/pull/54
I am interested in using this project for a very intensive workload currently based on Logstash.
During my investigation of the code, I saw that it spawns worker processes to handle all events. I wonder if setting GOMAXPROCS equal to "worker: %d" and using %d goroutines to process all events from a channel would make gogstash more stable and more performant.
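Roughly what I have in mind (a minimal sketch; the `workers` value stands in for the "worker: %d" setting):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

type Event struct{ Message string }

func main() {
	workers := 4 // would come from the "worker" config value
	runtime.GOMAXPROCS(workers)

	events := make(chan Event, 256) // single shared event channel
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(id int) { // N goroutines all draining the same channel
			defer wg.Done()
			for ev := range events {
				fmt.Printf("goroutine %d processed %q\n", id, ev.Message)
			}
		}(i)
	}

	for i := 0; i < 20; i++ {
		events <- Event{Message: fmt.Sprintf("event-%d", i)}
	}
	close(events)
	wg.Wait()
}
```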
I would like to implement my suggestion if you are open to considering it 🙏🏻