The notion of annotating a trust overlay-graph, rather than building a trust network out of individual participants in the system -- essentially, a graph that incorporates notional-proxy "entities", the way that references work in ASL -- is a novel and interesting one! I'm going to need to think for a while about #18, but let me elaborate a bit about the others. (Also, I notice that "Reputation systems" is in the outline but not yet populated, so there's definitely some documentation coming on this front.)
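To check that I'm picturing the overlay-graph idea correctly, here's roughly how I imagine those proxy "entities" working -- a very rough Python sketch, with every class and field name below made up for illustration:

```python
# Minimal sketch only: a trust graph whose nodes can be either individual
# participants or notional "entity" proxies standing in for a group.
# All names here are invented for illustration.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Node:
    name: str
    is_proxy: bool = False  # True for a notional "entity" standing in for others


@dataclass
class TrustGraph:
    # adjacency: truster -> set of trusted nodes (individuals or proxies)
    edges: dict = field(default_factory=dict)
    # which real participants each proxy entity currently resolves to
    proxy_members: dict = field(default_factory=dict)

    def trust(self, truster: Node, trusted: Node) -> None:
        self.edges.setdefault(truster, set()).add(trusted)

    def resolve(self, truster: Node):
        """Expand proxy entities into the individuals they currently stand for."""
        resolved = set()
        for node in self.edges.get(truster, set()):
            if node.is_proxy:
                resolved |= self.proxy_members.get(node, set())
            else:
                resolved.add(node)
        return resolved


# Example: Alice trusts "the shelter volunteers" as an entity, not person by person.
alice = Node("alice")
volunteers = Node("shelter-volunteers", is_proxy=True)
bob, carol = Node("bob"), Node("carol")

g = TrustGraph()
g.proxy_members[volunteers] = {bob, carol}
g.trust(alice, volunteers)
print(g.resolve(alice))  # resolves to bob and carol today; membership can change later
```

The point, if I'm reading the idea right, is that the edge is annotated against the proxy, so the set of people behind it can change without touching the edge itself.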
The use case that inspired Trinity has to do with trust related to capabilities -- thus the observation that trust is not only of whom but about what. If any of these resources are IoT devices, for example, their sensors and actuators are things that can be "trusted about". WRT #81, this derives from the human notion of trust about capability-to-do-thing -- e.g., I trust you to make that phone call to me at the restaurant, so I trust you with the 'read' capability on my phone number -- and I can certainly see it generalising to the human-skillset thing you describe.
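To make the "about what" part concrete, here's a minimal sketch (plain Python, all names hypothetical) of a trust grant that is scoped to a capability on a particular resource, rather than being an all-or-nothing edge between two people:

```python
# Minimal sketch: trust grants scoped to a capability ("about what"),
# not just to a person ("of whom"). Names are illustrative only.

from dataclasses import dataclass


@dataclass(frozen=True)
class TrustGrant:
    truster: str     # who is extending the trust
    trustee: str     # who is being trusted
    capability: str  # what they are trusted to do, e.g. "read"
    resource: str    # what the capability applies to, e.g. "phone-number"


def has_capability(grants, truster, trustee, capability, resource) -> bool:
    """True if `truster` has granted `trustee` this capability on this resource."""
    return TrustGrant(truster, trustee, capability, resource) in grants


grants = {
    # "I trust you to call me, so I trust you with 'read' on my phone number."
    TrustGrant("me", "you", "read", "phone-number"),
}

print(has_capability(grants, "me", "you", "read", "phone-number"))  # True
print(has_capability(grants, "me", "you", "actuate", "door-lock"))  # False -- different capability
```

The same shape seems to cover the human-skillset case you describe: the "resource" is just a skill like first aid instead of a sensor or actuator.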
On a more immediate level, my thinking was that one big open problem in currently-deployed web of trust systems is revocation: what do you do when you can't trust someone with some capability anymore? It doesn't look like the docs on this are up yet either, but this strikes me as particularly relevant to your use case and it is a problem Trinity solves.
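Without getting into Trinity's actual mechanism (the docs will cover that), here is a naive sketch, purely for discussion, of the shape of the problem: grants and revocations kept as explicit records, with the trust check consulting both. Every name below is illustrative only.

```python
# Naive sketch of capability revocation, for discussion only -- this is NOT
# Trinity's mechanism, just the shape of the problem.

# Each statement is (truster, trustee, capability, resource).
GRANTS = []
REVOCATIONS = []


def grant(truster, trustee, capability, resource):
    GRANTS.append((truster, trustee, capability, resource))


def revoke(truster, trustee, capability, resource):
    """Withdraw one previously extended capability without touching other grants."""
    REVOCATIONS.append((truster, trustee, capability, resource))


def is_trusted(truster, trustee, capability, resource) -> bool:
    stmt = (truster, trustee, capability, resource)
    return stmt in GRANTS and stmt not in REVOCATIONS


grant("me", "you", "read", "phone-number")
print(is_trusted("me", "you", "read", "phone-number"))  # True

revoke("me", "you", "read", "phone-number")
print(is_trusted("me", "you", "read", "phone-number"))  # False -- only this capability is withdrawn
```

The hard part in a distributed setting is presumably making sure everyone who relied on the old grant actually learns about the revocation, which is where the currently-deployed systems struggle.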
Apologies for all the not-yet-filled-in blanks, and thanks for your patience :)
Okay, at least this confirms that my understanding of what Trinity actually does isn't too far off the mark. And yes, I'm looking forward to its documentation being filled in a bit, but I'm not in a rush, so no worries.
Also, to clarify, yes, we are explicitly thinking about IoT devices as potential "resources," mostly for disaster-preparedness automation, though this is of course just one possible application of a "panic button" that can trigger an IoT device to do a thing. See #63 for an example. Even more pie-in-the-sky thinking here is that a user should be able to define their own "script" of things-that-should-happen when a given emergency occurs. Basically, imagine a Lua scripting environment for your panic room, but instead of a single room, this panic room is anywhere you control (or have an approved trust relationship with) an IoT device. My personal pet use case for this is triggering ALPRs to activate (perhaps by integrating with an OpenALPR back-end) when they detect the presence of a police cruiser, as this is one way to potentially alert residents to LEO raids.
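Since "a Lua scripting environment for your panic room" is doing a lot of work in that sentence, here's a rough sketch of the kind of user-defined script I have in mind -- plain Python rather than Lua, and every device name and action below is invented for illustration (none of it is real Buoy or OpenALPR code):

```python
# Rough sketch of a user-defined "panic script": declarative rules mapping an
# emergency trigger to actions on IoT devices the user controls or has an
# approved trust relationship with. Every name here is hypothetical.

PANIC_SCRIPT = [
    {
        "when": "panic_button_pressed",
        "do": [
            {"device": "front-door-camera", "action": "start_recording"},
            {"device": "neighbor-porch-light", "action": "flash"},  # trusted, not owned
        ],
    },
    {
        "when": "alpr_detected_police_cruiser",
        "do": [
            {"device": "street-facing-alpr", "action": "activate"},
            {"device": "household-phones", "action": "send_alert",
             "args": {"message": "Possible LEO activity nearby"}},
        ],
    },
]


def run_script(event: str, dispatch) -> None:
    """Fire every action whose trigger matches `event`.

    `dispatch(device, action, args)` stands in for whatever layer actually
    talks to the device -- and is the natural place to enforce the
    capability-scoped trust checks discussed above.
    """
    for rule in PANIC_SCRIPT:
        if rule["when"] == event:
            for step in rule["do"]:
                dispatch(step["device"], step["action"], step.get("args", {}))


# Example with a stand-in dispatcher that just prints what would happen:
run_script("panic_button_pressed",
           lambda device, action, args: print(f"{device}: {action} {args}"))
```

My instinct is that the trust checks belong in that dispatch layer, so a revoked capability simply makes the corresponding step a no-op, but that's an open design question.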
This issue has been migrated to betterangels/buoy#63.
See http://p3ki.org/dwb/doku.php/blue/trinity/tech/fundamentals
The basic idea here is that tickets such as #18, #81, and #82, which all relate to supplying metadata either about other users or about inanimate resources (such as locations [shelters, food banks, etc.]), could be enhanced with "trust information" that, for each user, represents the degree of usefulness that user believes the resource provides to them. Such metadata could conceivably become the basis of a distributed and pseudo-automated reputation system useful for various Buoy-related purposes, including:
Anyway, worth looking into as some far-future enhancement that also doesn't rely on a hierarchical trust model (like the TLS CA system, for instance).
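To make "trust information as metadata" slightly more concrete, below is a rough sketch (plain Python; the field names and the aggregation are invented for illustration, not a real reputation algorithm) of a per-user usefulness annotation on a resource, plus the kind of naive, trust-relative aggregation a non-hierarchical reputation layer might do over it:

```python
# Rough sketch of per-user "trust information" attached to a resource
# (a shelter, a food bank, another user, ...). Field names are invented;
# this is just the shape of the metadata, not a real reputation algorithm.

from dataclasses import dataclass
from statistics import mean


@dataclass(frozen=True)
class TrustAnnotation:
    annotator: str     # the user making the claim
    resource: str      # what the claim is about, e.g. "shelter:5th-street"
    usefulness: float  # how useful this user believes the resource is, 0.0-1.0


def usefulness_for(annotations, resource, trusted_annotators):
    """Aggregate only the annotations made by people the asking user trusts."""
    scores = [a.usefulness for a in annotations
              if a.resource == resource and a.annotator in trusted_annotators]
    return mean(scores) if scores else None


annotations = [
    TrustAnnotation("alice", "shelter:5th-street", 0.9),
    TrustAnnotation("bob", "shelter:5th-street", 0.4),
    TrustAnnotation("mallory", "shelter:5th-street", 0.0),
]

# Someone who trusts alice and bob (but not mallory) sees:
print(usefulness_for(annotations, "shelter:5th-street", {"alice", "bob"}))  # 0.65
```

The non-hierarchical property shows up in that last argument: the score depends on whose annotations you choose to trust, not on a central authority's say-so.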