Closed brandur closed 1 week ago
@bgentry Mind taking a look at this and letting me know what you think? I definitely found it a little on the tricky side to guarantee we make the right decision about whether to start the new producer based on whether the client is started: it's hard to know for sure whether a client is started or stopped, because it may be starting or stopping right at this instant (hence the addition of the new mutex).
I almost think a better design would be to never start a newly added producer/queue, and instead require the caller to stop and restart the client to have it start. However, I'm sure that would cause some user confusion/surprise.
@PumpkinSeed BTW, I changed a fair bit of code, but I included all your original commits in this variant.
We're trying a new CLA process (similar to the CLA Assistant you may have seen on other projects) just to make sure we don't have any copyright issues with any contributed code. Would you mind helping us test it out by going to https://github.com/riverqueue/cla and following the steps there? It basically involves opening a PR through GitHub's UI and adding your name to a CSV file. Thanks!
@bgentry Just since I'm sending 0.9.0 out for a bug fix, I thought I'd ping this one once more. If you have concerns, no worries, but if it's just that you haven't reviewed it yet, it may be worth including in the release.
@bgentry Updated. Mind taking another look?
thx.
A feature to make it possible to add a new queue using an invocation like `client.Queues().Add(name, config)`, similar to the API for adding a new periodic job. If the client is already started, the new producer is started too. The fetch and work contexts are the same ones created for the other producers during `Start`.

For now we punt entirely on the problem of removing queues, which is more complicated because how producer stop context cancellation should work is a fairly hard problem.
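To illustrate the start/stop race described above, here's a minimal, self-contained sketch of the pattern (not River's actual internals; the `client`, `producer`, and `AddQueue` names here are stand-ins): a single mutex guards the started flag and the producer map, so the "is the client started?" check and the conditional producer start happen atomically with respect to `Start` and `Stop`.

```go
package main

import (
	"fmt"
	"sync"
)

// producer is a stand-in for a per-queue producer.
type producer struct {
	name    string
	running bool
}

// client is a stand-in for the job client. mu guards started and
// producers so AddQueue can't race a concurrent Start or Stop.
type client struct {
	mu        sync.Mutex
	started   bool
	producers map[string]*producer
}

func newClient() *client {
	return &client{producers: make(map[string]*producer)}
}

func (c *client) Start() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.started = true
	for _, p := range c.producers {
		p.running = true
	}
}

func (c *client) Stop() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.started = false
	for _, p := range c.producers {
		p.running = false
	}
}

// AddQueue registers a new producer. Because the started check and the
// producer start happen under the same lock as Start/Stop, the new
// producer can't be left half-started by a client stopping concurrently.
func (c *client) AddQueue(name string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	p := &producer{name: name}
	if c.started {
		p.running = true
	}
	c.producers[name] = p
}

func main() {
	c := newClient()
	c.AddQueue("emails") // added before start: not running yet
	c.Start()            // starts all registered producers
	c.AddQueue("reports") // added after start: starts immediately
	fmt.Println(c.producers["emails"].running, c.producers["reports"].running)
	// → true true
}
```

The key design point is that there's exactly one lock ordering: every path that reads or flips `started` also holds the lock while acting on producers, which is the role the new mutex plays in the real change.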
A problem I was having while writing a stress test for the feature is that test signal channels were filling up. This was difficult to solve because test signals were always initialized by `newTestClient`, and because that init also initializes each maintenance service's test signals in turn, it's not possible to deinitialize them again. I had to refactor the tests so that test signals are only initialized when they're used, which is probably better anyway.
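The lazy-init pattern can be sketched like this (a hypothetical `testSignal` type, not River's actual implementation): until a test explicitly calls `Init`, the channel is nil and signals are dropped, so long-running production code paths can't fill a buffer nobody is draining.

```go
package main

import "fmt"

// testSignal is a hypothetical wrapper around a buffered channel used to
// observe internal events from tests.
type testSignal struct {
	ch chan struct{}
}

// Init allocates the channel. Only tests that actually wait on the
// signal call this, so uninitialized signals cost nothing.
func (s *testSignal) Init() { s.ch = make(chan struct{}, 8) }

// Signal emits an event. Before Init it's a no-op, and when the buffer
// is full the event is dropped rather than blocking the caller.
func (s *testSignal) Signal() {
	if s.ch == nil {
		return // not initialized: no test is listening
	}
	select {
	case s.ch <- struct{}{}:
	default: // buffer full; drop instead of blocking production code
	}
}

// Received reports whether a signal is pending without blocking.
func (s *testSignal) Received() bool {
	select {
	case <-s.ch: // receive from a nil channel never fires; default wins
		return true
	default:
		return false
	}
}

func main() {
	var s testSignal
	s.Signal() // dropped: Init hasn't been called
	s.Init()
	s.Signal()
	fmt.Println(s.Received())
	// → true
}
```

With this shape, `newTestClient` no longer needs to eagerly wire up every service's signals; each test opts in to just the signals it asserts on.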