weswalla closed this issue 2 years ago
This might be a core data-architecture problem with the indexing library, since all of its index pointers are based on `DnaAddressable`, and `Unit` is unique in that it is externally identified by a `DnaIdentifiable` instead (see `hdk_uuid_types`).

I'll have a look through and see if there's an easy fix. If not, the workaround will be to add a separate API endpoint for querying them by the (in this case internal) `UnitAddress` rather than the `UnitId`.
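A rough sketch of the distinction being described, in plain Rust. These toy types only mimic the shape of the problem: an internal content address (`DnaAddressable`-style) versus an external, DNA-scoped identifier (`DnaIdentifiable`-style), plus the extra by-address lookup the workaround would add. The names `UnitIndex`, `UnitAddress`, and `UnitId` here are illustrative, not the real hREA or `hdk_uuid_types` API.

```rust
use std::collections::HashMap;

/// Internal content address, as produced by hashing the entry
/// (the DnaAddressable-style key the indexing library expects).
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct UnitAddress(String);

/// External identifier scoped to a DNA (DnaIdentifiable-style),
/// e.g. a well-known unit symbol.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct UnitId {
    dna: String,
    symbol: String,
}

#[derive(Clone, Debug)]
struct Unit {
    address: UnitAddress,
    id: UnitId,
    label: String,
}

/// The index keys everything by address, so an id-based lookup needs a
/// second map resolving external ids to addresses first. Exposing
/// `get_by_address` directly is the "separate API endpoint" workaround.
struct UnitIndex {
    by_address: HashMap<UnitAddress, Unit>,
    by_id: HashMap<UnitId, UnitAddress>,
}

impl UnitIndex {
    fn new() -> Self {
        Self { by_address: HashMap::new(), by_id: HashMap::new() }
    }

    fn insert(&mut self, unit: Unit) {
        self.by_id.insert(unit.id.clone(), unit.address.clone());
        self.by_address.insert(unit.address.clone(), unit);
    }

    /// Workaround endpoint: query directly by internal UnitAddress.
    fn get_by_address(&self, addr: &UnitAddress) -> Option<&Unit> {
        self.by_address.get(addr)
    }

    /// Id-based lookup: resolve the external id to an address, then fetch.
    fn get_by_id(&self, id: &UnitId) -> Option<&Unit> {
        self.by_id.get(id).and_then(|a| self.by_address.get(a))
    }
}
```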
A note on units, in case it helps. If not, please ignore. I think the way units will work both now and long term is that groups will choose from the big list (now manually, later in an app of some sort). The chosen units will get loaded into the group's zome/dna/dht. In other words, there wouldn't be ongoing hits to the big unit list, and it would be like any other data. (I could be wrong of course, I'm no holochain architect! Please correct me if I am wrong!)
@fosterlynn it's actually likely that having the big shared list would be a better architectural pattern in Holochain. The reason being that this creates a larger set of peers holding the DHT, and that means better availability and faster sync speed for globally-shared datasets such as units of measurement.
Thanks, understood. We do need a way for each network to know or have its own list of units, though. So if there is a Holochain-wide list kept in Holochain itself that is referenced for the data, we'll need to add something for that. If we need to think more about it, let's start a new issue; sorry for taking this OT.
[edit: I have extracted the units from OM2 for the REA Playspace work, and started some kind of classification for easier search, when that is needed. Current work is here.]
I haven't really decided yet tbh @fosterlynn. But we have an architecture that's flexible enough to handle all cases.
The ideal seems like well-known units would be referenced from a globally-shared DHT, and collectives may additionally include their own units in a group-local DHT. By the time we're done we will be able to stitch together as many different DHTs of the same data as we like, and treat them all as single datastores.
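The stitching idea above could be sketched like this: fan a read out over several unit sources (a globally-shared DHT plus any group-local ones) and present the merged result as a single datastore. Everything here (`UnitSource`, `StitchedUnits`) is invented for illustration; it is not how hREA actually composes DHTs.

```rust
/// A source of units: in the real system this would be a call into one
/// DHT (global or group-local); here it is just an in-memory stub.
trait UnitSource {
    fn read_all_units(&self) -> Vec<String>; // unit symbols, simplified
}

struct StaticSource(Vec<String>);

impl UnitSource for StaticSource {
    fn read_all_units(&self) -> Vec<String> {
        self.0.clone()
    }
}

/// Stitch any number of sources together; callers see one datastore.
struct StitchedUnits {
    sources: Vec<Box<dyn UnitSource>>,
}

impl StitchedUnits {
    fn read_all_units(&self) -> Vec<String> {
        let mut all: Vec<String> = self
            .sources
            .iter()
            .flat_map(|s| s.read_all_units())
            .collect();
        all.sort();
        all.dedup(); // a well-known unit may appear in more than one DHT
        all
    }
}
```

The deduplication step matters in this model: a group that copies a well-known unit into its local DHT shouldn't see it listed twice.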
Calling `read_all_units` after 2 units have been created returns the following error: