geneontology / noctua

Graph-based modeling environment for biology, including prototype editor and services
http://noctua.geneontology.org/
BSD 3-Clause "New" or "Revised" License

Minerva should be able to handle operations on multiple models within a single batch request #112

Open kltm opened 9 years ago

kltm commented 9 years ago

With models now handled via get/add/remove operations just like anything else (and with the decision to /not/ move model_id up to the batch request level, partially because of this discussion), we can now see that there would be non-trivial value in being able to reference multiple different models within a single batch request.

A use case might be:

Another might be:

As part of this, Minerva will have to disambiguate referenced IDs between models. One option might be to create multiple batch requests behind the scenes by sorting the original batch request into new per-model requests; another option might be to namespace the IDs.
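As a sketch of the first option, the behind-the-scenes sorting might look something like the following; the `BatchOperation` shape and its field names are assumptions for illustration, not Minerva's actual wire format:

```typescript
// Hypothetical operation shape; Minerva's real batch format may differ.
interface BatchOperation {
  entity: string;                        // e.g. "individual", "edge", "model"
  operation: string;                     // e.g. "get", "add", "remove"
  modelId: string;                       // the model this operation targets
  arguments?: Record<string, unknown>;
}

// Group operations by model ID, preserving their original order within each
// model so per-model semantics stay the same as a single-model request.
function partitionByModel(ops: BatchOperation[]): Map<string, BatchOperation[]> {
  const byModel = new Map<string, BatchOperation[]>();
  for (const op of ops) {
    const group = byModel.get(op.modelId) ?? [];
    group.push(op);
    byModel.set(op.modelId, group);
  }
  return byModel;
}
```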

Similarly, as part of this ticket, the response handling of the client libraries will need to be tweaked to handle responses dealing with possibly multiple models/metas. This is mostly a nesting issue.
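For illustration, the nesting might look something like the shape below; every name here is a guess for the sake of the example, not the actual Minerva/barista response format:

```typescript
// Hypothetical nested response: instead of a single top-level data/meta
// payload, the client receives one payload per model the batch touched.
interface ModelPayload {
  data?: unknown;                 // model graph content, if requested
  meta?: unknown;                 // model-level metadata
  signal?: "merge" | "rebuild";   // how the client should apply the payload
}

interface MultiModelResponse {
  ok: boolean;
  models: Record<string, ModelPayload>;  // keyed by model ID
}
```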

As the majority of use cases involve working on one model at a time right now, this is not a high priority. However, it might be good to lay the groundwork now, especially since things are in a bit of flux.

This is from a Friday discussion with @hdietze.

kltm commented 8 years ago

A little talk with @hdietze about this. [Always Fridays. -Ed.] There is a fundamental problem here: if I update multiple models, I should get multiple models returned; these models may have different signals and intents, which would all have to be collected (a model may show up multiple times in a batch request, and therefore in the response), interpreted (is the signal merge or rebuild?), and distributed (not every client should get the entire results of a batch). This all means that large amounts of the response and barista infrastructure would have to be reconsidered, all for a use case that currently does not really exist.
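To make the collect/interpret/distribute problem concrete, here is a hedged sketch of what per-model fan-out could look like; the subscription model and every name below are assumptions, not barista's actual API:

```typescript
// Purely illustrative fan-out: each client sees only the results for models
// it is subscribed to, with each payload carrying its own apply signal.
type Signal = "merge" | "rebuild";

interface PerModelResult {
  modelId: string;
  signal: Signal;
  payload: unknown;
}

interface Client {
  id: string;
  subscribedModels: Set<string>;
  send(result: PerModelResult): void;
}

function distribute(results: PerModelResult[], clients: Client[]): void {
  for (const client of clients) {
    for (const result of results) {
      // Filter by subscription so a client never receives the entire
      // results of a batch, only the models it cares about.
      if (client.subscribedModels.has(result.modelId)) {
        client.send(result);
      }
    }
  }
}
```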

That said, for certain non-editorial bulk model-level and meta operations, we may want to consider a different API, but this probably should not be done at this low a level. Even then, it would be good to announce to all users that their models were deprecated. Maybe a general notification system (to announce reload or force reload) rather than shovelling models (although a force reload would do that a second later).

vanaukenk commented 4 years ago

@kltm - should we leave this ticket open?

kltm commented 4 years ago

Yes, this is on the roadmap and will be important for "knitted" models; it might also have consequences for batch operations for things like ART (theoretically).

tmushayahama commented 4 years ago

This would be awesome. So far in that prototype I am doing one independent request per model, in no particular order. However, because of the asynchronous nature, if one or more models fail for some reason, I continue with the operation. So if it were one Minerva transaction, that would be cool, because if one thing fails, everything fails (which is expected).
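As a rough sketch of the difference, with `sendBatch` as a hypothetical stand-in for whatever the prototype actually calls per model:

```typescript
// Hypothetical per-model request; assumed for illustration only.
declare function sendBatch(modelId: string, ops: unknown[]): Promise<void>;

// Current prototype behavior: one independent request per model, in no
// particular order; failures are tolerated and the rest proceed anyway.
async function saveIndependently(batches: Map<string, unknown[]>) {
  return Promise.allSettled(
    [...batches].map(([modelId, ops]) => sendBatch(modelId, ops))
  );
}
```

Note that even switching to `Promise.all` would only fail fast on the client side: per-model requests that already succeeded on the server would not be rolled back. True all-or-nothing semantics would need the server to treat the whole multi-model batch as a single transaction, which, per the next comment, is not what this ticket proposes.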

Tagging @kltm @vanaukenk @lpalbou @ukemi @goodb

kltm commented 4 years ago

As this stands for this ticket, the batched request and response would still be separated by model; undo/redo would still be per model, and there would be no "transaction" here.

vanaukenk commented 2 years ago

@kltm - should we keep this ticket open?

kltm commented 2 years ago

@vanaukenk If there are still plans to do the "knitted" models sooner (~a year) rather than later (~years, when we have a new architecture in place), I might keep this open.