Currently, the various storage backends already implement `bulk_insert` and `bulk_update` methods, which are used when importing an archive. Using the existing `bulk_update` method, the following works to ~~update~~ replace the extras:
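The original snippet is not reproduced here; as a stand-in, here is a minimal dict-backed sketch (not the actual AiiDA API) of the wholesale-replacement semantics that `bulk_update` has today:

```python
# Minimal sketch (NOT the real AiiDA backend): a dict stands in for the
# node table, to illustrate that bulk_update selects rows by id and
# replaces the extras column as a whole.

nodes = {
    1: {"extras": {"_aiida_hash": "abc123", "color": "red"}},
    2: {"extras": {"_aiida_hash": "def456"}},
}

def bulk_update(rows):
    """Replace the given columns for each row, selected by 'id'."""
    for row in rows:
        pk = row.pop("id")
        nodes[pk].update(row)  # 'extras' is overwritten wholesale

bulk_update([{"id": 1, "extras": {"color": "blue"}}])
print(nodes[1]["extras"])  # -> {'color': 'blue'}  (_aiida_hash is gone)
```

Note how the pre-existing `_aiida_hash` entry on node 1 is lost, which is exactly the behaviour discussed below.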
However, in the current implementation, node selection only works via the `id` (probably for efficiency reasons), and previously stored extras, e.g. `_aiida_hash`, would be overwritten, so it does not behave as an extend operation, which is usually what one wants. One could either add a dedicated method for updating the node extras, or extend the current one. Either way, care has to be taken to keep it efficient and not to iterate over individual nodes in the implementation, which could slow things down.
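The "extend" semantics being asked for could look like the following sketch (an assumption, not existing AiiDA code): new extras are merged into the stored ones instead of replacing the column, so keys such as `_aiida_hash` survive. A real backend implementation would want to push this merge down to the database (e.g. a single JSON-level update per batch) rather than loop over ORM instances.

```python
# Hypothetical "merge" variant (an assumption, not existing AiiDA code):
# a dict stands in for the node table.

nodes = {
    1: {"extras": {"_aiida_hash": "abc123", "color": "red"}},
}

def bulk_update_extras(updates):
    """updates: node id -> extras to merge into that node's extras."""
    for pk, new_extras in updates.items():
        # Existing keys are kept unless explicitly overridden.
        nodes[pk]["extras"] = {**nodes[pk]["extras"], **new_extras}

bulk_update_extras({1: {"color": "blue", "source": "import"}})
print(nodes[1]["extras"])
# -> {'_aiida_hash': 'abc123', 'color': 'blue', 'source': 'import'}
```

In a Python loop this is of course as slow as updating nodes one by one; the point of the sketch is only the merge semantics, not the execution strategy.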
Since all other `Node` properties should be immutable once stored, I currently cannot think of modifications other than changing the `extras` that would be interesting to perform in bulk for nodes.
As mentioned by @giovannipizzi, it would be nice if bulk-updating `Node` extras could be achieved using, e.g., a dictionary of the form:
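The exact shape of that dictionary is not spelled out above; one plausible shape (purely an assumption for illustration) maps each node pk to the extras to set or merge for it:

```python
# Hypothetical input format (an assumption, not from the discussion):
# node pk -> extras to merge into that node's existing extras.
extras_by_pk = {
    1234: {"tag": "converged"},
    1235: {"tag": "failed", "retries": 3},
}

# A dedicated method could then accept this mapping directly, e.g.
# backend.bulk_update_extras(extras_by_pk)  # hypothetical method name
```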