dgutson opened 1 year ago
Hey @dgutson. Sorry for the late response. I don't see any problem with using `&` and sending stdout to `/dev/null`. But yes, it's possible to add a feature that sends the client to the background. I will work on it when I can find some time.
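The `&` workaround described above can be sketched as a one-liner; `some_command` is just a placeholder, not a real program:

```shell
# Background the ts client itself so the shell prompt returns
# immediately; redirect stdout to /dev/null to discard the job ID
# that ts prints when it enqueues. ('some_command' is hypothetical.)
if command -v ts > /dev/null; then
    ts some_command > /dev/null &
fi
```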
I found a workaround: setting `TS_MAXCONN` to a large number (e.g. 10000), but I don't know whether that is correct, or how to make it unlimited.
`TS_MAXCONN` limits the number of unfinished jobs. The maximum value basically depends on the `ulimit` for open file descriptors. Setting it to a large number is fine, since you can't queue too many jobs anyway due to `ulimit`.
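To see the bound in practice: each queued job holds an open connection, so the server's open-file `ulimit` caps the queue depth. A hedged sketch (the `TS_MAXCONN` value below is just an example):

```shell
# Per-process open-file limit; this is what ultimately caps how many
# connections (and therefore queued jobs) the ts server can hold.
ulimit -n

# TS_MAXCONN is presumably read by the ts server when it starts, so
# it has to be set before the first ts command spawns the server, e.g.:
#   TS_MAXCONN=10000 ts some_command
```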
My idea is that queued jobs shouldn't consume any resources until they run; that's my rationale for unlimited queueing. Imagine I have several hundred thousand files to process over a weekend: I should be able to enqueue all of them, quietly waiting to be run whenever a slot frees up. Why should a job that is merely waiting consume any resource at all, such as an open file descriptor?
In order to queue a job, the client (your job) needs to contact the server, which opens a socket. The maximum number of sockets is limited by `TS_MAXCONN`, and also by `ulimit`. You can read up on Unix domain sockets for more details. So yes, you do open a lot of file descriptors just by queuing a lot of jobs.
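One rough way to observe this, assuming `lsof` is available and the server's process name is `ts` (on some distros the binary is `tsp`):

```shell
# List Unix-domain sockets held by the ts server process; with many
# jobs queued you should see one connection per waiting client.
if command -v lsof > /dev/null; then
    lsof -U -a -c ts || true   # non-zero exit simply means no ts server found
fi
```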
Maybe I'm thinking of localhost only, where what I enqueue is just the shell command, which at the end of the day is just text.
How can I enqueue without blocking?
For example: `ts "/usr/bin/cat /dev/random > /dev/null"`
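As a side note, quoting the whole command line as a single argument probably doesn't do what's intended: ts would look for an executable literally named `/usr/bin/cat /dev/random > /dev/null`, and the redirection would never be interpreted. A common fix (a sketch, not verified against this exact build) is to hand the string to a shell:

```shell
# Let sh parse the redirection instead of passing it as part of argv:
if command -v ts > /dev/null; then
    ts sh -c '/usr/bin/cat /dev/random > /dev/null'
fi
```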
Maybe it's because I have the queue "full"?
My use case is that I need to enqueue 3k tasks, and I want the enqueueing process to be asynchronous. I assume that just using `&` to launch them in the background is a hack and that there should be a better way.
If this is not possible, then my feature request is to add a flag that makes the enqueueing return immediately.
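For what it's worth, until such a flag exists, the enqueueing itself can be detached with a small wrapper; `enqueue_all` and its one-command-per-line job-list format are hypothetical, not part of ts:

```shell
# Read one shell command per line from a file and enqueue the whole
# batch in a detached subshell, so the caller returns immediately
# even if individual ts calls block on a full queue.
enqueue_all() {
    joblist=$1
    (
        while IFS= read -r cmd; do
            ts sh -c "$cmd" > /dev/null
        done < "$joblist"
    ) &
}
```

Calling `enqueue_all jobs.txt` returns at once; only the background subshell waits on the ts server.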