h-REA / hREA

A ValueFlows / REA economic network coordination system implemented on Holochain and with supplied Javascript GraphQL libraries
https://docs.hrea.io

read all units query failing to resolve #330

Closed · weswalla closed this issue 2 years ago

weswalla commented 2 years ago

Calling read_all_units after 2 units have been created returns the following error:

12:32:42 [Tryorama - Local Conductor] debug: Jul 05 12:32:42.149 ERROR wasm_trace: hc_zome_rea_unit::__get_unit_extern:zomes/rea_unit/zome/src/lib.rs:46 output_type = "core::result::Result<hc_zome_rea_unit_rpc::ResponseData, holochain_wasmer_common::result::WasmError>"; bytes = [129, 167, 97, 100, 100, 114, 101, 115, 115, 146, 196, 39, 132, 45, 36, 190, 119, 116, 103, 117, 249, 48, 150, 239, 14, 152, 31, 49, 165, 118, 13, 56, 153, 204, 159, 34, 162, 33, 207, 44, 233, 33, 10, 141, 99, 89, 237, 64, 160, 219, 5, 196, 39, 132, 33, 36, 159, 23, 169, 177, 203, 189, 125, 252, 120, 149, 241, 239, 150, 144, 239, 239, 37, 99, 219, 138, 78, 214, 61, 71, 133, 32, 74, 137, 241, 104, 125, 115, 82, 168, 31, 246]; Deserialize("missing field `id`")

[result]:  {
  page_info: {
    startCursor: '0',
    endCursor: '0',
    hasPreviousPage: true,
    hasNextPage: true
  },
  edges: [],
  errors: [
    {
      Guest: 'Error in remote call Host("Wasm error while working with Ribosome: Deserialize([129, 167, 97, 100, 100, 114, 101, 115, 115, 146, 196, 39, 132, 45, 36, 190, 119, 116, 103, 117, 249, 48, 150, 239, 14, 152, 31, 49, 165, 118, 13, 56, 153, 204, 159, 34, 162, 33, 207, 44, 233, 33, 10, 141, 99, 89, 237, 64, 160, 219, 5, 196, 39, 132, 33, 36, 125, 86, 237, 154, 208, 77, 226, 3, 68, 48, 1, 159, 0, 186, 192, 87, 213, 48, 233, 237, 138, 100, 13, 17, 88, 79, 134, 171, 32, 216, 87, 13, 165, 103, 222, 174])")'
    },
    {
      Guest: 'Error in remote call Host("Wasm error while working with Ribosome: Deserialize([129, 167, 97, 100, 100, 114, 101, 115, 115, 146, 196, 39, 132, 45, 36, 190, 119, 116, 103, 117, 249, 48, 150, 239, 14, 152, 31, 49, 165, 118, 13, 56, 153, 204, 159, 34, 162, 33, 207, 44, 233, 33, 10, 141, 99, 89, 237, 64, 160, 219, 5, 196, 39, 132, 33, 36, 159, 23, 169, 177, 203, 189, 125, 252, 120, 149, 241, 239, 150, 144, 239, 239, 37, 99, 219, 138, 78, 214, 61, 71, 133, 32, 74, 137, 241, 104, 125, 115, 82, 168, 31, 246])")'
    }
  ]
}
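
For reference, the byte arrays inside those Deserialize(...) errors are plain MessagePack, so they can be decoded to see what the index actually returned. A rough debugging sketch (not hREA code; it only assumes the rmpv crate), using the payload bytes from the wasm trace above:

```rust
// Decode the raw payload reported in the wasm trace to inspect its structure.
// This is a standalone debugging sketch, not part of the hREA codebase.
fn main() {
    let bytes: Vec<u8> = vec![
        129, 167, 97, 100, 100, 114, 101, 115, 115, 146, 196, 39, 132, 45, 36, 190, 119, 116,
        103, 117, 249, 48, 150, 239, 14, 152, 31, 49, 165, 118, 13, 56, 153, 204, 159, 34, 162,
        33, 207, 44, 233, 33, 10, 141, 99, 89, 237, 64, 160, 219, 5, 196, 39, 132, 33, 36, 159,
        23, 169, 177, 203, 189, 125, 252, 120, 149, 241, 239, 150, 144, 239, 239, 37, 99, 219,
        138, 78, 214, 61, 71, 133, 32, 74, 137, 241, 104, 125, 115, 82, 168, 31, 246,
    ];

    // rmpv::decode::read_value parses arbitrary MessagePack into an rmpv::Value tree.
    let value = rmpv::decode::read_value(&mut bytes.as_slice()).unwrap();
    println!("{:#?}", value);
    // The payload turns out to be a single-entry map, roughly:
    //   { "address": [ <39-byte hash>, <39-byte hash> ] }
    // i.e. it is keyed by `address` (the two hashes look like a DnaHash +
    // EntryHash pair) and carries no `id` field, which matches the
    // Deserialize("missing field `id`") failure in the trace.
}
```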
pospi commented 2 years ago

This might be a core data architecture problem with the indexing library, since all of its index pointers are based on DnaAddressable, whereas Unit is unique in that it's externally identified by a DnaIdentifiable instead (see hdk_uuid_types).

I'll have a look through and see if there's an easy fix. If not, the workaround will be to add a separate API endpoint for querying them by the (in this case internal) UnitAddress rather than the UnitId.
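
To make that mismatch concrete, here is a minimal self-contained sketch of the failure mode. These are not the hREA or hdk_uuid_types types, just stand-ins with the same field layout; it assumes the serde, serde_bytes and rmp-serde crates:

```rust
// Stand-in types illustrating the identifier mismatch only; the real types
// live in hdk_uuid_types and the hREA zomes and differ in detail.
use serde::{Deserialize, Serialize};
use serde_bytes::ByteBuf;

/// Roughly the shape of a DnaAddressable index pointer: (DnaHash bytes, EntryHash bytes).
#[derive(Serialize, Deserialize, Debug)]
struct AddressKeyed {
    address: (ByteBuf, ByteBuf),
}

/// Roughly the DnaIdentifiable shape the unit zome expects: (DnaHash bytes, ID string).
#[derive(Serialize, Deserialize, Debug)]
struct IdKeyed {
    id: (ByteBuf, String),
}

fn main() {
    let stored = AddressKeyed {
        // truncated stand-ins for the two 39-byte hashes seen in the trace
        address: (
            ByteBuf::from(vec![0x84, 0x2d, 0x24]),
            ByteBuf::from(vec![0x84, 0x21, 0x24]),
        ),
    };

    // The index serialises its records keyed by `address`...
    let bytes = rmp_serde::to_vec_named(&stored).unwrap();

    // ...so decoding them as the id-keyed struct fails with the same kind of
    // error reported in the wasm trace: missing field `id`.
    let err = rmp_serde::from_slice::<IdKeyed>(&bytes).unwrap_err();
    println!("{err}"); // prints something like: missing field `id`
}
```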

fosterlynn commented 2 years ago

A note on units, in case it helps. If not, please ignore. I think the way units will work, both now and in the long term, is that groups will choose from the big list (now manually, later in an app of some sort). The chosen units will get loaded into the group's zome/DNA/DHT. In other words, there wouldn't be ongoing hits to the big unit list, and it would be like any other data. (I could be wrong of course, I'm no Holochain architect! Please correct me if I am!)

pospi commented 2 years ago

@fosterlynn it's actually likely that having the big shared list would be a better architectural pattern in Holochain. The reason being that this creates a larger set of peers holding the DHT, and that means better availability and faster sync speed for globally-shared datasets such as units of measurement.

fosterlynn commented 2 years ago

> The reason being that this creates a larger set of peers holding the DHT, and that means better availability and faster sync speed for globally-shared datasets such as units of measurement.

Thanks, understood. We do need a way for each network to know or have its own list of units, though. So if there is a Holochain-wide list, kept in Holochain itself, that is referenced for the data, we'll need to add something for that. If we need to think more about it, let's start a new issue; sorry for taking this off-topic.

[edit: I have extracted the units from OM2 for the REA Playspace work, and started some kind of classification for easier search, when that is needed. Current work is here.]

pospi commented 1 year ago

I haven't really decided yet tbh @fosterlynn. But we have an architecture that's flexible enough to handle all cases.

The ideal seems to be that well-known units would be referenced from a globally shared DHT, while collectives may additionally include their own units in a group-local DHT. By the time we're done, we will be able to stitch together as many different DHTs of the same data as we like and treat them all as a single datastore.