Thanks so much Wes! Great to be diving into this conversation.
If I'm interpreting all this correctly then what has been implemented so far is a sensible and serviceable pattern for a particular kind of interaction flow. But I think more fidelity is needed.
> Now, given a `methodEh`, we need to know the input dimension (so that we can display the widget for generating an assessment along the input dimension) and the output dimension (so that we can display the value of the generated assessment from the method - say, the "total likes")
I mostly disagree with this coupling to a Method. The work I mostly completed in #41 was focused on iterating the `SensemakerStore` API to a place where data could be bound to components in a more granular fashion.
There are many use-cases for Widgets which don't do any Assessment creation:
- `READ` backwards over all your Assessments in this Dimension of the Resource until the most recent is found.
- `READ` of the most recent `SUM` value in an output Dimension. (The Method is kinda irrelevant.)
- `READ` over all of everyone's Assessments in this Dimension of the Resource, first filtered and grouped by the Agent IDs of your friends and then the most recent for each friend returned.

It would be an awkward shame if every one of these UI elements had to be bound to a `methodEh` and receive some props for creating Assessments that it is never going to use. It would be even more awkward if these elements were obliged to render Assessment creation controls, because it'd mean weird things like repeated `CREATE` controls when all you really want is to display the data in different ways, or being forced to implement all of the above as a single combined Widget as a workaround. Also, it might imply a need to create dummy Methods just to link output Dimensions so that Widgets pick up different things.
The 3 above examples all come from the same source data, but it's different output Dimensions being queried and displayed. We can't have this proposed 1:1 relationship between an input & output Dimension through a Method. Even if Methods work that way internally; we still need multiple Methods executing against the same input Dimension in order to derive multiple aggregate insights.
So I think we need Widgets broken down more. There should be:
- 'Dimension display Widgets', which take a `resourceEh` & `dimensionEh` as parameters.
- 'Assessment creation Widgets', which take a `resourceEh`, `dimensionEh` and an optional `methodEh[]`. I figure this is the easiest way to bind cascading computations, and `AssessDimensionWidget` implements this logic already. The widget should not do anything with the `methodEh`s other than call `SensemakerStore.runMethod` for each of them after any `CREATE` operation (ignoring any errors).

'Dimension display Widgets' and 'Assessment creation Widgets' can update their state from any output Dimensions naturally through Observable data. 'Dimension display Widgets' would be responsible for their own logic in interpreting the retrieved `Assessment` data. Some may only use the most recent Assessment in the Dimension. Others would read all of them. Others still would perform their own filtering. I don't think we have to necessarily introduce verbiage to sub-type them but I do think well-named Widgets which handle generic cases are an important element in our core libraries. It's certainly not a core concern: `AssessDimensionWidget` should not be tracking any `latestAssessment` internally because that is a specific type of binding to a specific form of computation input / output.
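To make the split concrete, a minimal sketch of the two interfaces could look like this. Apart from `SensemakerStore.runMethod`, the store calls and other names here are illustrative assumptions, not the current API:

```ts
// Illustrative sketch only: apart from SensemakerStore.runMethod, names here are assumptions.
import { LitElement } from 'lit';
import { property } from 'lit/decorators.js';

type EntryHash = Uint8Array;

interface SensemakerStore {
  createAssessment(input: { resourceEh: EntryHash; dimensionEh: EntryHash; value: unknown }): Promise<void>;
  runMethod(input: { resourceEh: EntryHash; methodEh: EntryHash }): Promise<void>;
}

// 'Dimension display Widget': bound only to a Resource + Dimension; no Method involved.
export abstract class DisplayDimensionWidget extends LitElement {
  @property({ attribute: false }) resourceEh!: EntryHash;
  @property({ attribute: false }) dimensionEh!: EntryHash;
  // Concrete widgets decide how to interpret the Assessments they observe
  // (latest only, all of them, filtered/grouped by friends, etc.)
}

// 'Assessment creation Widget': additionally accepts Methods to trigger after a CREATE.
export abstract class AssessDimensionWidget extends LitElement {
  @property({ attribute: false }) resourceEh!: EntryHash;
  @property({ attribute: false }) dimensionEh!: EntryHash;
  @property({ attribute: false }) methodEhs: EntryHash[] = [];
  abstract sensemakerStore: SensemakerStore;

  protected async assess(value: unknown): Promise<void> {
    await this.sensemakerStore.createAssessment({
      resourceEh: this.resourceEh,
      dimensionEh: this.dimensionEh,
      value,
    });
    // Run any bound cascading computations after CREATE, ignoring errors.
    await Promise.allSettled(
      this.methodEhs.map((methodEh) =>
        this.sensemakerStore.runMethod({ resourceEh: this.resourceEh, methodEh })
      )
    );
  }
}
```

Concrete widgets then only render their own controls and call `assess()`; nothing on the display side gets entangled with Method props.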
All Widgets would still necessarily bind to the `SensemakerStore` via props or (more likely) something compatible with `@lit-labs/context`. (Do we foresee Widgets being bound to multiple `SensemakerStore`s in future?) This actually gives them the freedom to make queries and pass `dimensionEh` etc. through however they like. Those Observables are the correct alternative to doing something like `AssessDimensionWidget.latestAssessment`. The same applies to `DisplayDimensionWidget.assessment`. Not every display Widget shows only a single Assessment.
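For example, a 'total likes' display Widget driven purely by an Observable might be shaped like this. The `assessmentsFor` query and the other store details are assumptions; in practice the store would arrive via context rather than a prop:

```ts
// Sketch: a display Widget deriving its own view from a stream of Assessments.
// The store API shape (assessmentsFor) is an assumption, not the current SensemakerStore API.
import { LitElement, html } from 'lit';
import { customElement, property, state } from 'lit/decorators.js';

interface Assessment { value: unknown; author: Uint8Array; timestamp: number }
type Unsubscriber = () => void;
interface Readable<T> { subscribe(run: (value: T) => void): Unsubscriber }

interface SensemakerStore {
  assessmentsFor(resourceEh: Uint8Array, dimensionEh: Uint8Array): Readable<Assessment[]>;
}

@customElement('total-likes-display')
export class TotalLikesDisplay extends LitElement {
  @property({ attribute: false }) store!: SensemakerStore;
  @property({ attribute: false }) resourceEh!: Uint8Array;
  @property({ attribute: false }) dimensionEh!: Uint8Array;

  @state() private assessments: Assessment[] = [];
  private unsubscribe?: Unsubscriber;

  connectedCallback() {
    super.connectedCallback();
    // Subscribe to the stream; no `latestAssessment` is ever pushed in from outside.
    this.unsubscribe = this.store
      .assessmentsFor(this.resourceEh, this.dimensionEh)
      .subscribe((assessments) => (this.assessments = assessments));
  }

  disconnectedCallback() {
    this.unsubscribe?.();
    super.disconnectedCallback();
  }

  render() {
    // This widget's own interpretation: show the most recent value in the output Dimension.
    const latest = this.assessments[this.assessments.length - 1];
    return html`<span>${latest ? String(latest.value) : '-'}</span>`;
  }
}
```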
I think all Widgets probably also want access to the currently authenticated Agent ID as well, so that they can filter Assessments on this basis (eg. "my like"). Maybe that is another expected prop or context to define in their interface.
IMO this separation away from Widgets & Methods towards direct bindings between Widgets & Dimensions gives a great deal more freedom and customisability to both Widget developers and CAs configuring Resource views.
The coupling in `WidgetRegistry` seems like a related limiting factor. This structure is defined on the assumption that all Assessment Widgets pair their `CREATE` controls with their `READ` ones:
```ts
[dimensionEh: string]: {
  display: typeof ConcreteDisplayDimensionWidget,
  assess: typeof ConcreteAssessDimensionWidget,
}
```
In my view, there ought to be any number of different `display` and `assess` Widgets available to any Dimension. If built idiomatically then they are never paired by developers. They are decomposed into individual units which can be paired if the CA decides to place them adjacent to each other.
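Something in this direction would remove the forced pairing. Purely illustrative, with the helper names made up:

```ts
// Illustrative alternative: display and assess Widgets registered independently per Dimension,
// so the CA can combine any of them side by side (or not at all).
import type { LitElement } from 'lit';

type WidgetCtor = typeof LitElement;

interface WidgetRegistry {
  [dimensionEh: string]: {
    display: WidgetCtor[]; // any number of READ-style widgets for this Dimension
    assess: WidgetCtor[];  // any number of CREATE-style widgets for this Dimension
  };
}

// Registration appends rather than overwrites; nothing forces widgets into pairs.
function registerDisplayWidget(registry: WidgetRegistry, dimensionEh: string, widget: WidgetCtor): void {
  const entry = (registry[dimensionEh] ??= { display: [], assess: [] });
  entry.display.push(widget);
}

function registerAssessWidget(registry: WidgetRegistry, dimensionEh: string, widget: WidgetCtor): void {
  const entry = (registry[dimensionEh] ??= { display: [], assess: [] });
  entry.assess.push(widget);
}
```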
> given a `methodEh`, we need to be able to know the input and output dimensions for the widget that is bound to that method
I agree, but I think the configuration for this is more about assembling Dimensions and Widgets in a meaningful way that in some (but not all) cases will mirror the configuration of Methods. The user interface needs to be further decoupled from the computation. In the simple case, we can abstract over that with the configuration UI.
`sensemake-resource` implies that there is only one configurable location in a Resource view for Assessment widgets to render in. I think this is highly limiting to the visual / UX design of a Resource type. I'm not opposed to having it as a higher-level helper component which knows how to fetch all the configured Dimensions for a Resource type and render all of those controls into an area. But I think there needs to be much more granularity than that. How does it even determine the rendering order?
In these cases I think the configuration could be something like `WidgetConfig`s, where each `WidgetConfig` provides `dimensionEh`; and for an 'Assessment creation Widget', any `methodEh`s to trigger after `CREATE`.
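For concreteness, such a `WidgetConfig` could carry roughly this data; a sketch only, and the field names beyond `dimensionEh` / `methodEh` are assumptions:

```ts
// Sketch of the per-placement configuration described above; field names are assumptions.
type EntryHashB64 = string;

interface DisplayWidgetConfig {
  kind: 'display';
  widgetName: string;        // which registered 'Dimension display Widget' to render
  dimensionEh: EntryHashB64; // the (output) Dimension it reads from
}

interface AssessWidgetConfig {
  kind: 'assess';
  widgetName: string;        // which registered 'Assessment creation Widget' to render
  dimensionEh: EntryHashB64; // the input Dimension it writes to
  methodEhs: EntryHashB64[]; // Methods to run after each CREATE (may be empty)
}

type WidgetConfig = DisplayWidgetConfig | AssessWidgetConfig;
```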
Actually, despite the length of this post I think we are pretty close to this. Hopefully this makes sense!
Overall, high-level feedback is that I agree with all you've outlined for longer-term directions. Though I imagine a simplified approach to widgets and widget configurability is more sensible in the near-term, as we are spread relatively thin and there are other areas of the NH stack that we are trying to advance commensurately. Do note, however, that this perspective is grounded in my sense of the depth and complexity of this work, which feels like a big undertaking. If it's clear and relatively straightforward for you, I'd be happy to review some proposed specs and examples!
> I mostly disagree with this coupling to a Method. The work I mostly completed in https://github.com/neighbour-hoods/sensemaker-lite/pull/41 was focused on iterating the `SensemakerStore` API to a place where data could be bound to components in a more granular fashion.
Yep I agree that this is too limiting. It was done purely for demo purposes at dweb camp. Sorry for not making that clearer in the original comment. I'm in agreement for a more generic approach, though I was not able to design and implement such a system (as it seems to me quite non-trivial) in the timeframe of dweb camp.
> There are many use-cases for Widgets which don't do any Assessment creation:
I really like these examples you've outlined. Makes me want to have a solid set of use-cases enumerated to derive the design/feature requirements for these composable components.
> Even if Methods work that way internally; we still need multiple Methods executing against the same input Dimension in order to derive multiple aggregate insights.
On the note of triggering multiple method computations when an assessment is created along input dimensions: I wonder how to best divide this logic between the client and the zome code. For example, I can imagine some kind of "call-back" like logic in our zome: when an assessment is created along a dimension, some operation then checks for the existence of links which point to methods that are to be run. That way, we don't have to leave the cascading logic up to the client (and perhaps we could even have "depth" parameters, to prevent really long chains of method invocation). Though this is a slightly separate topic and deserves its own issue.
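To make the division concrete, here is roughly the client-side version of that cascade with the "depth" guard; `methodsTriggeredBy` is a made-up lookup, not an existing zome or store call:

```ts
// Hypothetical client-side cascade that zome-level "callback" links would replace.
interface MethodLink { methodEh: Uint8Array; outputDimensionEh: Uint8Array }

interface SensemakerStore {
  runMethod(input: { resourceEh: Uint8Array; methodEh: Uint8Array }): Promise<void>;
  // Assumed lookup: Methods linked as "to be run" when this Dimension receives an Assessment.
  methodsTriggeredBy(dimensionEh: Uint8Array): Promise<MethodLink[]>;
}

async function cascadeMethods(
  store: SensemakerStore,
  resourceEh: Uint8Array,
  dimensionEh: Uint8Array,
  maxDepth = 3, // guard against very long chains of method invocation
): Promise<void> {
  if (maxDepth === 0) return;
  for (const { methodEh, outputDimensionEh } of await store.methodsTriggeredBy(dimensionEh)) {
    await store.runMethod({ resourceEh, methodEh }).catch(() => undefined); // ignore errors
    // The output Dimension may itself be an input to further Methods.
    await cascadeMethods(store, resourceEh, outputDimensionEh, maxDepth - 1);
  }
}
```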
> - 'Dimension display Widgets', which take a `resourceEh` & `dimensionEh` as parameters.
I think there is more to expand on regarding the "display"/`READ` widgets, in terms of controls as you outlined in the above examples. Things like:
> 'Dimension display Widgets' and 'Assessment creation Widgets' can update their state from any output Dimensions naturally through Observable data.
I also wonder to what degree we want to make these components fully dependent on the sensemaker store. The sensemaker store is a few steps removed from the data in the DHT and may contain broken/invalid states. How do we handle such cases? If a widget is displaying an older state and a user uses it to generate an assessment, would the user have acted differently if they had a more accurate/up-to-date representation of the relevant sensemaker state?
Maybe in other words, a possible approach for this is allowing widgets to directly query the sensemaker zome for their specific slice of data (manifested as a "refresh" type button), which would require a more granular zome API than we currently have. Thoughts on that?
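Roughly what I have in mind; the granular call here is made up, since that API doesn't exist yet:

```ts
// Hypothetical "refresh" path: widgets ask the store to re-query the sensemaker zome
// for their specific slice of data, then pick up the result through their normal subscriptions.
interface SensemakerStore {
  // Assumed granular call; would hit the zome directly and update the store's observables.
  fetchAssessmentsForResourceAlongDimension(
    resourceEh: Uint8Array,
    dimensionEh: Uint8Array,
  ): Promise<void>;
}

async function onRefreshClicked(store: SensemakerStore, resourceEh: Uint8Array, dimensionEh: Uint8Array) {
  await store.fetchAssessmentsForResourceAlongDimension(resourceEh, dimensionEh);
}
```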
> It's certainly not a core concern: `AssessDimensionWidget` should not be tracking any `latestAssessment` internally because that is a specific type of binding to a specific form of computation input / output.
So would we then give them scoped access to specific sensemaker zome calls?
> (Do we foresee Widgets being bound to multiple `SensemakerStore`s in future?)
I don't think I understand why this would ever be a use-case. The widgets would be rendered in the context of a NH which only has one sensemaker.
> The coupling in `WidgetRegistry` seems like a related limiting factor. This structure is defined on the assumption that all Assessment Widgets pair their `CREATE` controls with their `READ` ones.
I am more than happy to work with other implementation proposals! I genuinely was having a hard time thinking through all this stuff, so I mostly went with what I could cognitively manage, not what I thought was best for the long-term direction of NH tooling.
I'd love to have your thinking in the last few paragraphs expanded into some examples/samples? I am definitely in alignment with that kind of thinking:
> As a CA, I want to be able to bind specific Widgets to specific locations (eg. 'total likes' goes in 'post header metadata'; 'like this post' goes in 'post footer controls'). I want these Widgets to maintain the display order that I configure them in, if I choose to assign multiple of them to each slot.
> On the note of triggering multiple method computations when an assessment is created along input dimensions: I wonder how to best divide this logic between the client and the zome code. For example, I can imagine some kind of "call-back" like logic in our zome: when an assessment is created along a dimension, some operation then checks for the existence of links which point to methods that are to be run. That way, we don't have to leave the cascading logic up to the client (and perhaps we could even have "depth" parameters, to prevent really long chains of method invocation).
There are interesting things we could potentially do with this. The ideal situation for me would be that the UI components simply listen to streams of data. If `SensemakerStore` can introspect as to whether some UI control is listening for updates to a Method's output Dimension; then it may also be able to infer whether a new `Assessment` it has received from a peer in some input Dimension means that the data in an observed output Dimension may be stale and requires recomputation of the Method binding them.
Anyways, yes, big and curious topic; feel free to split it out...
> manifested as a "refresh" type button
100%. There's a design principle of "slow data" at play for me here where, sure, you can be informed about things going on elsewhere in the network but you don't have to deal with them immediately and maybe having the UI push new stuff in your face all the time is actually kinda counterproductive for a lot of attention-deficit folks.
> [bind specific Widgets to specific locations] expanded into some examples/samples?
Yeah, I'm not sure what this ends up looking like. It may be that we can actually do it within the Context layer, by "composing over" Resource view CustomElements with "Sensemaker-augmented" CustomElements that do the configuration loading & Dimension Widget display and rendering.
I was looking at it from a perspective of allowing UI designers to explicitly define the visual areas of configurability bound to the display of a particular Resource Type. This would mean CustomElements with `<slot>`s which have semantically meaningful names particular to the Resource Type, and some means of configuring those slots with Dimension Widgets in the Launcher Dashboard. So, 'total likes' goes in 'post header metadata'.
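A sketch of what that could look like for a hypothetical 'post' Resource Type (element and slot names are made up):

```ts
// Sketch only: a Resource view exposing semantically named slots that the Launcher
// Dashboard configuration could fill with Dimension Widgets.
import { LitElement, html } from 'lit';
import { customElement } from 'lit/decorators.js';

@customElement('post-resource-view')
export class PostResourceView extends LitElement {
  render() {
    return html`
      <header>
        <slot name="post-header-metadata"></slot> <!-- e.g. 'total likes' display widget -->
      </header>
      <article><slot></slot></article>            <!-- the Resource data itself -->
      <footer>
        <slot name="post-footer-controls"></slot> <!-- e.g. 'like this post' assess widget -->
      </footer>
    `;
  }
}
```

The Launcher would then render the configured Dimension Widgets into those named slots, in the order the CA chose.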
(Aside: I think eventually we'll want to be composing Resource views inside of each other so we can do 'this comment attached to this meme' so maybe a little forward consideration to that is useful now?)
As an alternative it may also be reasonable to think of the configurability of these visual areas more generically. If Resource views are at the leaves of the component tree and Dimension Widgets are added around them, then our configuration looks more like block layout management. You can put "widgets underneath the Resource view", maybe options for "widgets above", "left widget pane" and so on. Maybe you can compose these 'Assessment widget drawer organisers' with multiple sets and that configuration gets stored in the Context; such that "widgets underneath the Resource view" + "left widget menu" can be combined (with different but complementary sets of widgets) around visual representation of the Resource data.
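In configuration terms that might be as simple as something like this (all names assumed):

```ts
// Illustrative shape for the generic 'Assessment widget drawer' configuration.
type DrawerPosition = 'above-resource' | 'below-resource' | 'left-pane' | 'right-pane';

interface AssessmentWidgetDrawerConfig {
  position: DrawerPosition;
  // Ordered widget placements, same idea as the WidgetConfig sketch earlier.
  widgets: { widgetName: string; dimensionEh: string; methodEhs?: string[] }[];
}

// Stored in the Context per Resource Def; multiple drawers compose around the Resource view.
type ResourceViewLayoutConfig = AssessmentWidgetDrawerConfig[];
```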
Does this feel limiting? How much design cohesion and detailed layout configurability are we likely to need at the moment? Does this sufficiently account for 'weirder looking' Resource Types? Popout menus, control overlays, stacked rendering, ...?
(CC @Kasimirsuterwinter @herramienta-digital)
> The widgets would be rendered in the context of a NH which only has one sensemaker.
I might want to overlay multiple Sensemaking lenses from different organisations onto a shared collaboration space to understand (eg.) different organisational perspectives on a co-delivered project. In this case I am hoping eventually it would just be a matter of configuring a new AppId which references existing Cells & CustomElement data.
Adding this here for context, though we've chatted through it: https://hackmd.io/@adaburrows/HklvEnk02/edit
A discussion around @pospi's inquiry:
The current approach is as follows:
`@neighbourhoods/client` defines a `SensemakeResource` class which is to be used as a custom element, `sensemake-resource`, which takes 2 parameters: `resourceEh` and `resourceDefEh`.

We then have a stateful property `_activeMethod`, which maps a `resourceDefEh` to a `methodEh`: https://github.com/neighbour-hoods/sensemaker-lite/blob/b61ba8fe788b491defbb7e92632750cce10e3fbd/client/src/sensemakerStore.ts#L29-L31

This is what gets updated in the current "wizard". When configuring which widget to use for a given resource eh, this is the stateful property that is being updated in the sensemaker store.

This was done for simplicity for dweb, so that there could be at least some kind of customization at the resource def level. I think ideally what we would have is, for any given resource def, an active dimension along which it is to be assessed, and then for that dimension, a list of possible widgets that are compatible with generating a value within the range of such dimension. Something like:
resourceDef -> active dimensions -> widgets compatible with active dimensions
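As a rough TypeScript shape for that mapping (identifiers illustrative):

```ts
// Sketch of the resourceDef -> active dimensions -> compatible widgets mapping.
type EntryHashB64 = string;

interface ResourceDefAssessmentConfig {
  resourceDefEh: EntryHashB64;
  activeDimensions: {
    dimensionEh: EntryHashB64;
    // Widgets whose generated values fall within the Range of this Dimension.
    compatibleWidgets: string[];
  }[];
}
```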
Though, in the long run this seems relatively rudimentary to me, and I imagine a more complex UX will be required to enable a more parallel assessment experience. I assume that computational insights come from when the many resources have been assessed along the same set of dimensions (e.g. importance, relevance, clarity... etc.)
Now, given a `methodEh`, we need to know the input dimension (so that we can display the widget for generating an assessment along the input dimension) and the output dimension (so that we can display the value of the generated assessment from the method - say, the "total likes"). There is currently a binding here between two widgets: the one to create a user-generated assessment, and the one to display the value of the output of a method (so we can have real-time indicators of the "totals" and other aggregate forms of assessments). Which is why in the `NeighbourhoodApplet` interface I called them `widgetPairs`: https://github.com/neighbour-hoods/nh-launcher/blob/e08517d326becc5d6d591198b6d2a1c3ec1e6b55/ui/libs/nh-launcher-applet/src/index.ts#L38C8-L42

`assess` is the widget for generating an assessment and `display` is for displaying the output assessment of a method (likely the most recent one, too).

These pairs get registered in the `widgetRegistry`:
https://github.com/neighbour-hoods/sensemaker-lite/blob/b61ba8fe788b491defbb7e92632750cce10e3fbd/client/src/sensemakerStore.ts#L27
https://github.com/neighbour-hoods/sensemaker-lite/blob/b61ba8fe788b491defbb7e92632750cce10e3fbd/client/src/applet.ts#L52-L57
Which is currently bound to a specific dimension hash. When registering a widget pair, we pass in multiple compatible dimensions: https://github.com/neighbour-hoods/nh-launcher/blob/e08517d326becc5d6d591198b6d2a1c3ec1e6b55/ui/app/src/matrix-store.ts#L1377-L1381
The main reason for this was simply so that for a given assessment in the dashboard, these widgets could be rendered for assessments along either the input or output dimensions (though we probably want to reconsider this). For example in https://github.com/neighbour-hoods/nh-launcher/blob/e08517d326becc5d6d591198b6d2a1c3ec1e6b55/ui/app/src/elements/components/table-filter-map.ts#L295-L297
As mentioned earlier, given a `methodEh`, we need to be able to know the input and output dimensions for the widget that is bound to that method, which is where `_methodDimensionMapping` comes in:
https://github.com/neighbour-hoods/sensemaker-lite/blob/b61ba8fe788b491defbb7e92632750cce10e3fbd/client/src/sensemakerStore.ts#L33C48-L33C48
https://github.com/neighbour-hoods/sensemaker-lite/blob/b61ba8fe788b491defbb7e92632750cce10e3fbd/client/src/applet.ts#L59-L64

Back to `sensemake-resource`: it will simply fetch the widget pair from the widget registry for the given resource def, instantiate these objects and render them in the object's template literal: https://github.com/neighbour-hoods/sensemaker-lite/blob/b61ba8fe788b491defbb7e92632750cce10e3fbd/client/src/widgets/sensemake-resource.ts#L26-L53

`sensemake-resource` is meant to wrap around the resource component of an applet, like here: https://github.com/neighbour-hoods/todo-applet/blob/7a01153635e8707ca73a341de722aa7fb9580d79/ui/src/components/task-list.ts#L73-L82