Is the Ethereum Attestation Service (https://attest.sh/) sufficient for the evaluation part of hypercerts? Should we reach out and discuss?
holkexyz closed this issue 1 year ago.
I think it could be a good fit; it's probably very close to how we would want to do it ourselves. Their flexibility in supporting both off-chain and on-chain attestations is a nice perk.
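For concreteness, here's a minimal sketch of what submitting an evaluation as an on-chain EAS attestation could look like, assuming the EAS TypeScript SDK (`@ethereum-attestation-service/eas-sdk`). The contract address, schema UID, and the deliberately naive schema fields are placeholders, not a proposed design:

```typescript
import { ethers } from "ethers";
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";

// Placeholder values -- the real ones depend on the target chain and on
// whatever evaluation schema we end up registering.
const EAS_CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000";
const EVALUATION_SCHEMA_UID =
  "0x0000000000000000000000000000000000000000000000000000000000000000";

async function submitEvaluation(signer: ethers.Signer, claimId: bigint): Promise<string> {
  const eas = new EAS(EAS_CONTRACT_ADDRESS);
  eas.connect(signer);

  // Encode the payload against the schema's declared field layout.
  const encoder = new SchemaEncoder("uint256 claimId, uint8 score, string comment");
  const data = encoder.encodeData([
    { name: "claimId", value: claimId, type: "uint256" },
    { name: "score", value: 8, type: "uint8" },
    { name: "comment", value: "Methodology looks sound", type: "string" },
  ]);

  const tx = await eas.attest({
    schema: EVALUATION_SCHEMA_UID,
    data: {
      recipient: ethers.ZeroAddress, // an evaluation has no obvious recipient
      expirationTime: 0n,            // 0 = does not expire
      revocable: true,               // evaluator can retract later
      data,
    },
  });
  return tx.wait(); // resolves to the UID of the new attestation
}
```

The same SDK can also produce signed off-chain attestations, which is the flexibility mentioned above.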
The main thing we would need to decide is the schema for evaluations that we'll use. @holkeb has mentioned that we want to be able to both evaluate a specific hypercert and evaluate impact/work that no hypercert has claimed yet.
Yes, the way I'm thinking about it: it's intuitive that people will want to submit an evaluation for a single hypercert, but it should also be possible to submit an evaluation of something where no hypercert has claimed that impact/work yet.
In the latter case, we would need to store the same dimensions a hypercert would have (with some dimensions, like the set of contributors, left empty) together with the evaluation (see davidad's email on Quantifying Biodiversity).
In the former case we can do exactly the same: if a user wants to evaluate the hypercert with claimID XXX, the evaluation stores the dimensions of that hypercert.
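To make that concrete, here is an illustrative (not proposed) shape for such an evaluation record; all field names and types are assumptions for the sketch:

```typescript
// Illustrative shape of an evaluation that carries the hypercert dimensions.
// Case 1 (existing hypercert): the dimensions are copied from the hypercert
// with the given claimID at the time of evaluation.
// Case 2 (no hypercert yet): claimId is absent and the evaluator fills in the
// dimensions directly, possibly leaving some (e.g. contributors) empty.
interface EvaluationRecord {
  claimId?: bigint;       // only set when evaluating an existing hypercert
  workScope: string[];    // e.g. ["reforestation"]
  workTimeStart: number;  // unix timestamps
  workTimeEnd: number;
  impactScope: string[];  // e.g. ["biodiversity"]
  contributors: string[]; // may be empty in case 2
  score: number;          // however we end up quantifying the evaluation
  methodology: string;    // e.g. a URI describing how it was evaluated
}
```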
That way all evaluations live in and can be located in the same impact space as the hypercerts themselves.
This also matters because the hypercert with claimID XXX could later be merged or split. With this approach, the evaluation still points to parts of the new hypercert(s).
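A rough sketch of why this works, assuming the simplified record shape above: an evaluation still applies to a merged or split hypercert whenever their regions of the impact space intersect. The overlap logic here is deliberately simplistic:

```typescript
// Simplified slice of the impact space shared by evaluations and hypercerts.
interface ImpactRegion {
  workScope: string[];
  workTimeStart: number; // unix timestamps
  workTimeEnd: number;
}

// An evaluation keeps pointing at (parts of) new hypercerts after a merge or
// split as long as its scope and time range still intersect theirs.
function evaluationApplies(evaluation: ImpactRegion, hypercert: ImpactRegion): boolean {
  const scopeOverlap = evaluation.workScope.some((scope) =>
    hypercert.workScope.includes(scope),
  );
  const timeOverlap =
    evaluation.workTimeStart <= hypercert.workTimeEnd &&
    hypercert.workTimeStart <= evaluation.workTimeEnd;
  return scopeOverlap && timeOverlap;
}
```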
Is the Ethereum Attestation Service (https://attest.sh/) sufficient for the evaluation part of hypercerts?
I've watched the video here: https://www.youtube.com/watch?v=ZWybaJLAkFA (attestations, schemas, data structures).
While I'm not against the Ethereum Attestation Service (it seems like a reasonable way to structure data), I don't see how it solves evaluation.
Related discussion: https://github.com/hypercerts-org/hypercerts/discussions/449
A really cool resource: http://web.archive.org/web/20230307121254/https://impactgenome.org/impact-index/
I think this can scale. Kleros jurors are experts at nitpicking and finding holes: professional internet trolls.
Hey @marsrobertson, we're going to try to work on this a bit; agreed that it doesn't by itself solve evaluation. My take from working in risk is that you separate the feed of signals from the model that predicts risk/no risk (or impact/no impact). And you need a training set of high-effort evaluations to really know how to score, but the first stab can be a graph of attestations with some trusted roots.
Coming from the Ceramic world, I'm thinking a stream in ComposeDB is a nice, scalable way to write them (it's free), and I 100% agree with Kleros jurors for disputes, possibly to provide the high-effort training set!
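As a purely illustrative first stab at the "graph of attestations with some trusted roots" idea (nothing decided in this thread), trust could be propagated outward from a hand-picked root set along attestation edges, decaying per hop:

```typescript
// Naive trust propagation over a graph of attestations: breadth-first from a
// set of trusted roots, with a fixed per-hop decay. Illustrative only.
interface AttestationEdge {
  attester: string; // address that issued the attestation
  subject: string;  // address (or claim) being attested to
}

function propagateTrust(
  edges: AttestationEdge[],
  trustedRoots: string[],
  decay = 0.5, // assumed per-hop decay factor
): Map<string, number> {
  // Index outgoing edges by attester.
  const outgoing = new Map<string, string[]>();
  for (const { attester, subject } of edges) {
    outgoing.set(attester, [...(outgoing.get(attester) ?? []), subject]);
  }

  const scores = new Map<string, number>(
    trustedRoots.map((root): [string, number] => [root, 1]),
  );
  let frontier = trustedRoots;
  let weight = decay;
  while (frontier.length > 0) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const subject of outgoing.get(node) ?? []) {
        if (!scores.has(subject)) {
          scores.set(subject, weight); // first (shortest) path wins
          next.push(subject);
        }
      }
    }
    frontier = next;
    weight *= decay;
  }
  return scores;
}
```

Kleros-adjudicated disputes could then supply the labeled examples needed to calibrate the decay (or a richer model).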
Thanks for the thoughts! I totally agree that a simple attestation protocol is not going to fully solve building a system for scalable evaluations. I'd encourage us to move the conversation to this GitHub discussion: https://github.com/hypercerts-org/hypercerts/discussions/449
I'd love to keep GitHub issues focused on concrete tasks, and I think we were mostly assessing the viability of EAS as a simple attestation mechanism. Going to close this issue for now in our triage. Thanks again, all, for the great discussion!
Just for reference, a link to the Optimism deployment https://community.optimism.io/docs/identity/atst-v1/