Hey, I've been looking and I can't seem to find a way to perform bulk writes. I saw issue 123, which asked for the same thing, but that was closed by PR 131. However, I still can't see a way to perform multiple patches in a single call to different records with different values and different queries. There's feathers-batch, but that still doesn't eliminate the roundtrip between Feathers and Mongo itself for each record. Am I missing something, or is this not supported?
Thanks.
Correct, bulk writes are currently not supported. The module would need some kind of debounce mechanism that collects all queries to the service within a certain timeframe and then issues the bulk request.
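To give a rough idea of what such a mechanism could look like, here is a minimal sketch, not an existing API: `BulkCollector` and the 50ms delay are made up, and it assumes the adapter's Mongoose/MongoDB `Model` with its standard `bulkWrite` method.
```js
// Hypothetical sketch only: collects individual write operations and flushes
// them as a single Model.bulkWrite() call after a short quiet period.
class BulkCollector {
  constructor (Model, flushDelay = 50) {
    this.Model = Model;
    this.flushDelay = flushDelay;
    this.queue = [];
    this.timer = null;
  }

  // `op` is a MongoDB bulkWrite operation, e.g. { updateOne: { filter, update } }
  push (op) {
    return new Promise((resolve, reject) => {
      this.queue.push({ op, resolve, reject });
      // Restart the debounce timer on every new operation
      clearTimeout(this.timer);
      this.timer = setTimeout(() => this.flush(), this.flushDelay);
    });
  }

  async flush () {
    // Take everything queued so far and issue one bulk request
    const pending = this.queue.splice(0);
    try {
      const result = await this.Model.bulkWrite(pending.map(p => p.op));
      pending.forEach(p => p.resolve(result));
    } catch (error) {
      pending.forEach(p => p.reject(error));
    }
  }
}
```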
This is an interesting idea. I have been working on a couple of things similar to what @daffl mentioned about a "debounce mechanism that queues things" throughout the lifecycle of a request for service-level transactions. I will see if I can maybe convert that code to give you an example of how it could work here. If I don't get back to this later this week, ping me here and remind me.
Thanks for the clarification. Though I'm not entirely sure why debounce is needed - why not give callers the responsibility of batching their calls themselves? I don't know if it fits the Feathers structure, but if someone needs it for the sake of performance, let them organise things as they want. That will probably be better than having a generic solution for all use cases. In the meantime I'll use feathers-batch - hopefully the performance will be good enough for our needs.
If the user is supposed to handle it, you might also just create your own bulk write custom service that calls the operations on the collection, along the lines of:
```js
class MessageBulkService {
  // `data` is an array of MongoDB bulkWrite operation documents
  async create (data) {
    return app.service('messages').Model.bulkWrite(data);
  }
}
```
It's up to you to sanitize the data accordingly of course.
In hindsight, I am a bit confused as well. Why not just use something like
```js
app.service('my-service').Model.bulkWrite(data)
```
Edit: @daffl beat me to it.
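For reference, the `data` here is MongoDB's standard array of bulkWrite operation documents, so a mixed batch of patches could look something like this (the ids and field names are just placeholders):
```js
app.service('my-service').Model.bulkWrite([
  { updateOne: { filter: { _id: id1 }, update: { $set: { status: 'read' } } } },
  { updateOne: { filter: { userId: 42 }, update: { $inc: { count: 1 } } } },
  { deleteOne: { filter: { _id: id2 } } }
]);
```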
@DaddyWarbucks I've done this kind of batching quite a bit for some internal modules and have been thinking of using it in an up-to-date version of feathers-batch. That way simultaneous service calls would all be batched together accordingly.
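The basic trick for batching simultaneous calls is to queue everything synchronously and flush on the next microtask. A rough sketch of the idea, where `sendBatch` is a hypothetical transport function that sends all calls in one request and resolves with an array of results:
```js
let batch = null;

function batchedCall (call) {
  return new Promise((resolve, reject) => {
    if (!batch) {
      batch = [];
      // Flush on the next microtask, after all synchronous calls have queued
      Promise.resolve().then(() => {
        const calls = batch;
        batch = null;
        sendBatch(calls.map(c => c.call))
          .then(results => results.forEach((result, i) => calls[i].resolve(result)))
          .catch(error => calls.forEach(c => c.reject(error)));
      });
    }
    batch.push({ call, resolve, reject });
  });
}
```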
That's a cool idea @daffl. Wouldn't that couple feathers-batch to MongoDB though?
This is probably TMI for this issue... but I have already typed it, so here it is. My brain has been ticking lately on how to use something like feathers-batchloader (aka Facebook's DataLoader) for more things than just "joining" records. The memo/batch/cache idea that it exposes is certainly performant, but equally as important it offers a nice API. There is some potential there, but it requires things to be in the same tick, so that may kill the idea. My brain has also been ticking on some more basic memoization through the request lifecycle too, for example https://daddywarbucks.github.io/feathers-fletching/hooks.html#stashable
The service-level transaction thing I mentioned may be of some interest. At its core it is really just syntactic sugar for a way of "collecting" service calls and their results throughout the request lifecycle, and what you do with those calls/results is ultimately up to you. It still needs some polish I think, but I will eventually add it to feathers-fletching. I'd be happy to share it if you want.
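To give a rough idea of the "collecting" part (just a sketch, not the actual feathers-fletching code): two application-level hooks that gather every call and its result onto a per-request array in `params`, assuming the same `params` object is threaded through to any nested service calls.
```js
// Rough sketch: gather every service call and its result per request
const startCollecting = context => {
  context.params.collected = context.params.collected || [];
  return context;
};

const collectResult = context => {
  context.params.collected.push({
    path: context.path,
    method: context.method,
    data: context.data,
    result: context.result
  });
  return context;
};

app.hooks({
  before: { all: [startCollecting] },
  after: { all: [collectResult] }
});
```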
> If the user is supposed to handle it, you might also just create your own bulk write custom service that calls the operations on the collection, along the lines of [...] It's up to you to sanitize the data accordingly of course.
I was hoping there was already something that did the heavy lifting of connecting everything to the normal Feathers workflow, like hooks. We use hooks pretty extensively, and batching like that will bypass them.
But now that I think of it, I might be able to do something like that. So thanks again.
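For instance, if I register the custom bulk service from above like any other Feathers service, hooks should still run, just once per batch instead of once per record. A sketch, where `sanitizeBulkOps` is a hypothetical hook:
```js
app.use('messages/bulk', new MessageBulkService());

// Hooks registered here run in the normal Feathers request flow,
// once per batch rather than once per record
app.service('messages/bulk').hooks({
  before: {
    create: [sanitizeBulkOps] // hypothetical hook validating each operation
  }
});
```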