theRAPTLab / gsgo

GEM-STEP Foundation repo migrated from GitLab June 2023

Annotation Design #1

Open benloh opened 4 years ago

benloh commented 4 years ago

Joshua writes:

We have been thinking about what we’d need / want for the “annotation” layer / functionality. And, we decided that rather than have a fully separate “tool” as we did in the prior iPad app for STEP, we’d like to instead have a “mode” or “role” that applies to the iPad app. The basic idea is as follows, with some added terminology that we have developed (feel free to correct it if you have existing terms):

We assume there are two basic interfaces: 1) the modeling interface, where you set up the model, edit the code, and project, running on a laptop, and 2) the agent interface, previously known as “FakeTrack”, which currently will run on an iPad and let a student (who has logged in via a token) control one or more agents within the model that is running in the modeling interface. We had previously assumed you might somehow switch tabs to have access to the buttons or paint, or whatever, but that seemed awkward.

The new idea is that the user can instead make a choice within the agent interface to either be in “active” mode or “annotation” mode. If they are in active mode, they are controlling an agent that appears within the shared model that is projected in the modeling interface. If, instead, they are in “annotation” mode, they are controlling an agent that does not impact the model by default. Rather, they see it running locally on top of the shared model.

The cool thing is that the person controlling the modeling interface will ALSO have a list of layers that reflect students who have logged in and entered “annotation mode.” If that person selects one, it becomes visible.

So, I could be controlling a simple star agent in annotation mode. Literally just a sticker of a star that moves around to highlight what I think is important. Only I see it. However, if you turn on my layer, the projection now shows my star to the whole class. If Noel’s layer is turned on you can now see his star and my star. Either way, they get recorded, but do not impact the “active” agents in the system, and these layers can be turned off and on later.

The thinking is that if we make the agents robust enough, then we no longer need a full annotation tool / tab, but just clever use of pre-made agents in annotation mode. For example, if we assume a new primitive function is “draw on background” then you can use an agent consisting of a pencil image that uses the paint functionality to annotate. And then huzzah! If you want to draw lines as part of your model, do it in active mode. If you want it to just be a “layer” that turns on and off, do it in annotation mode.

In a sense, we are also thinking of this as two distinct name-spaces, so that the annotations display over the model, but do not interact with it and vice versa. Also, annotation agents only interact with other visible annotation agents.

So, for example, let’s say that we wanted to have kids annotate by dropping stickers where they think something is hot. We might also create an agent that counts those stickers and put it in annotation mode. If you make it visible, that counter displays. If you make my layer visible, the counter will count my stickers. Make Noel’s layer visible, and now we see the combined total of the stickers. Turn mine off again and you just see Noel’s, etc.

Either way, this is the same basic counting functionality that we imagined using for basic agents and widgets, so I am hoping that it means we have already made it. And either way we need to solve some technical / UI issues about making sure that you can set properties of the agent that you are controlling, whether on the modeling interface or the agent interface. But now that work helps us in both modes and not just in active mode.
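
To make the layer / namespace idea concrete, here is a minimal sketch (the names and API are illustrative assumptions, not existing GEM-STEP code): annotations are always recorded into per-student layers, the modeling interface toggles which layers are visible, and an annotation counter only tallies stickers on visible layers.

```ts
type StudentId = string;

interface AnnotationAgent {
  owner: StudentId;                  // which student's layer this agent belongs to
  kind: 'sticker' | 'counter' | 'pen';
  x: number;
  y: number;
}

class AnnotationLayers {
  private agents: AnnotationAgent[] = [];
  private visible = new Set<StudentId>(); // layers turned on from the modeling interface

  add(agent: AnnotationAgent): void {
    this.agents.push(agent);              // always recorded, even while the layer is hidden
  }

  setLayerVisible(student: StudentId, on: boolean): void {
    if (on) this.visible.add(student);
    else this.visible.delete(student);
  }

  // Annotation agents only "see" other annotation agents on visible layers;
  // active model agents are never returned here, so annotations cannot affect them.
  visibleAgents(): AnnotationAgent[] {
    return this.agents.filter(a => this.visible.has(a.owner));
  }
}

// A counter agent in annotation mode: it tallies only the stickers it can see.
function countVisibleStickers(layers: AnnotationLayers): number {
  return layers.visibleAgents().filter(a => a.kind === 'sticker').length;
}

// Usage: Sri and Noel each drop a sticker; toggling layers changes the count.
const layers = new AnnotationLayers();
layers.add({ owner: 'sri', kind: 'sticker', x: 10, y: 20 });
layers.add({ owner: 'noel', kind: 'sticker', x: 40, y: 5 });
layers.setLayerVisible('sri', true);
console.log(countVisibleStickers(layers)); // 1 (only Sri's layer is on)
layers.setLayerVisible('noel', true);
console.log(countVisibleStickers(layers)); // 2 (combined total)
```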

A few key goals of this idea are:

1) Simplify the interface - instead of making lots of custom tools you just have to worry about one set (agents) and a mode. If the agent interface is robust enough it will just work across modes.

2) Assume that in most cases, we can build the annotation agents we want and share them via the shared library. Lets you focus on making high-powered widgets and primitives and lets us (and future teachers) easily share agents via the library / import / export function. So we might make a “simple” pen agent, and a “complex” pen agent where you can set the color and shape, and then selectively put those into models we are working on depending on what we want the kids to be able to do, and either way you don’t need to worry about those details so long as we have a nice setup for adding properties to agents and making them available via the library, modeling interface, and agent interface.

3) Develop a pattern / culture of thinking of modeling as a verb, and not as a noun (model). That is, we want kids to be thinking constantly about showing information and having it relate to other ideas they have incorporated in their model along with their own embodiment, and then modify both. We think by treating this as “agents all the way down” it increases the chance of them getting into that mindset and not thinking of it as a fully different set of tools. At least, that’s our hope.

4) If we need something really complicated, we can still ask you to build it at some point as a custom widget similar to the graphs. But then it'll already be set to

Joshua's Whimsical Design

benloh commented 4 years ago

I think I get the gist of what you're looking for. I think one concern I have is making sure it's easy for the kids to grok and manage all the different states/modes/actions their iPad might be used for. E.g., when it's time for everyone to add stars, how do you get kids to do it? Do they all individually have to go get a star agent and set their iPad in annotation mode? Is this a 5-second operation, or does it take 5 minutes to get all the kids connected and using the right tool? This has always been an issue with the system and I don't think we've ever really handled it all that well. But in general I like the integration of the tools so that we're not building separate tools and modes for every type of annotation.

I'm wanting to call them "ghost" agents to convey both their invisibility and their inability to modify the world.

If some agents are ghost agents that can only interact with other ghost agents, then maybe there's no need for a separate annotation vs active mode. If you're using a ghost agent, you're effectively always in annotation mode. If you're using a non-ghost agent, you're in active mode. In general, ghost agents can read parameters from the real/active world, so counters and such can be made, but they can't affect the real world agents.

You can then show/hide ghost agents on the main display. And do so on a student by student or whole group / class basis.
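
One way to read the "ghost agent" idea in code, as a sketch with hypothetical names: ghost agents receive only a read-only view of the active world, so readouts like counters are possible, but there is simply no handle through which they could mutate real agents, and the main display filters which students' ghosts it draws.

```ts
interface ActiveAgent {
  id: string;
  x: number;
  y: number;
  energy: number;
}

// Ghost agents only ever get this read-only view of the active world.
type WorldView = ReadonlyArray<Readonly<ActiveAgent>>;

interface GhostAgent {
  owner: string;                  // the student whose layer this ghost lives on
  update(world: WorldView): void; // may read active agents, can never write them
}

// Example ghost: counts active agents inside a horizontal band of the space.
class RegionCounter implements GhostAgent {
  count = 0;
  constructor(public owner: string, private x0: number, private x1: number) {}
  update(world: WorldView): void {
    this.count = world.filter(a => a.x >= this.x0 && a.x <= this.x1).length;
  }
}

// The main display chooses which students' ghosts to show, per student or whole class.
function visibleGhosts(ghosts: GhostAgent[], shownStudents: Set<string>): GhostAgent[] {
  return ghosts.filter(g => shownStudents.has(g.owner));
}

// Usage: the ghost reads positions but has no way to change the real agents.
const world: WorldView = [{ id: 'fish1', x: 12, y: 4, energy: 3 }];
const counter = new RegionCounter('sri', 0, 50);
counter.update(world);
console.log(counter.count); // 1
```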

benloh commented 4 years ago

In GitLab by @daveseah on Aug 13, 2020, 07:59

Thanks Joshua! Here's my tech breakdown of what I think is being suggested. Will followup afterwards.

Shift in GEM-STEP Thinking

distilled from Joshua's original email

SUMMARY OF SHIFT

This affects the functionality of the annotation system. Instead of having a separate annotation tool on the iPad with various controls, make it into two modes: "Active Mode" for interacting with model agents, and "Annotation Mode" for non-model annotation (display information) agents.

ORIGINAL THINKING

Two roles: laptop and ipad:

  1. laptop: hosts "modeling interface" - setup model, edit code, and project
  2. ipads: host the "agent interface" - login via token, control one or more agents within the model running on (1). It was assumed that the ipad app has "tabs" or some control to switch between annotations. This seems awkward.

THE NEW IDEA

Instead of multimodal tabs, student selects either "ACTIVE" or "ANNOTATION" mode.

Furthermore:

EXAMPLES of ANNOTATION AGENTS

QUESTIONS:

GOALS

A few key goals of this idea were:

  1. simplify the interface
  2. leverage high-powered widgets/primitives, and agent templates can be shared with library / import / export function
  3. modeling as a verb, not as a noun. "We want kids to be thinking constantly about showing information and having it relate to other ideas they have incorporated in their model along with their own embodiment, and then modify both." In other words, treat this as agents all the way down to increase the chance of them getting into the mindset.
  4. if anything complicated is needed, teachers ask the dev team for a custom "widget" to provide for script use.

benloh commented 4 years ago

In GitLab by @jdanish on Aug 13, 2020, 08:50

Thanks Sri! Some clarifications:

One, we are thinking of two interfaces and two modes. We also imagine that the modes determine a sort of namespace / interaction space. Details below:

Interfaces:

  1. Modeling: This is where you create and edit the model, as well as select which annotations to display. This is going to have to run on a laptop.
  2. Agent Interface: This is where you control one agent at a time.

Modes:

  1. Active mode: Any agent in active mode impacts the entire model and is visible to everyone.
  2. Annotation mode: Any agent in annotation mode can be displayed but only impacts other annotation agents (though maybe they can "read" active agents).

Visibility:

Thus each kid might see only the active agents and their annotations locally.
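
A tiny sketch of the interaction rule these two modes imply (helper names are hypothetical): active agents affect only the model, annotation agents affect only other annotation agents, and annotation agents may additionally "read" active ones.

```ts
type AgentMode = 'active' | 'annotation';

interface ModalAgent {
  id: string;
  mode: AgentMode;
}

// Can agent `a` affect agent `b`? Active and annotation are separate namespaces.
function canAffect(a: ModalAgent, b: ModalAgent): boolean {
  if (a.mode === 'active') return b.mode === 'active'; // active agents only impact the model
  return b.mode === 'annotation';                      // annotation agents only impact annotations
}

// Can agent `a` read agent `b`? Same as above, except annotation agents
// may also "read" active agents (the open question noted in the text).
function canRead(a: ModalAgent, b: ModalAgent): boolean {
  return canAffect(a, b) || (a.mode === 'annotation' && b.mode === 'active');
}
```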

benloh commented 4 years ago

In GitLab by @jdanish on Aug 13, 2020, 08:55

Sorry, re-reading your comment I think this was in fact clear to you. But I'll leave it in case it helps, and will answer the other questions once my meeting is done.

benloh commented 4 years ago

In GitLab by @jdanish on Aug 13, 2020, 09:58

QUESTIONS:

benloh commented 4 years ago

In GitLab by @daveseah on Aug 13, 2020, 10:04

Thanks for checking! Let me map this out just to be sure because this is a three-dimensional array of possibilities and I am terrible at managing more than two in my head at once:

ui+mode                      laptop   ipad
model UI + active mode       YES      N/A
model UI + annotation mode   N/A      N/A
agent UI + active mode       YES      YES
agent UI + annotation mode   yes?     YES

modeling UI : Create and edit the model.
              Select annotations to display.
agent UI    : Control one agent at a time.
- - - - - - - - - - - - - - - - - - - - - - - - - - - -
active mode : Only in agent UI. Affects model.
annote mode : Only in agent UI. Doesn't affect model.
              Overlays single annotation agent on model.

Looking at the table I made, I'm realizing that my "model UI" does not take the modeling activity stages into account:

  1. model UI for agent definition and setting initial parameters. [making Agent Template]
  2. model UI for agent placement (and parameter overrides) before starting the simulation. [making Agent Instances from Templates]
  3. agent UI for model simulation while running the simulation. [run the model in simulation step]
  4. agent UI for model playback while reviewing the simulation. [replay the recorded session]

Implication: there are system states that correspond to:

For the iPads, I imagine that there is an implicit "agent definition" and "agent placement", but the available features depend on which stage of modeling activities 1-4 is active.

Would that imply that for ipads, both active/annotation modes exist for model UI?
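
For reference while we talk this through, here is one way the matrix and the four activity stages above could be encoded. This is purely illustrative (these types do not exist in the codebase); it just restates the table in a form we could poke at.

```ts
// Cells copied from the table: YES / N/A, with "yes?" marking the open question.
type Cell = 'YES' | 'N/A' | 'yes?';

const uiModeMatrix: Record<string, { laptop: Cell; ipad: Cell }> = {
  'model UI + active mode':     { laptop: 'YES',  ipad: 'N/A' },
  'model UI + annotation mode': { laptop: 'N/A',  ipad: 'N/A' },
  'agent UI + active mode':     { laptop: 'YES',  ipad: 'YES' },
  'agent UI + annotation mode': { laptop: 'yes?', ipad: 'YES' },
};

// The four modeling activity stages; which features are live depends on the stage.
enum ModelingStage {
  DefineAgents,    // 1. model UI: agent definition, initial parameters (Agent Templates)
  PlaceAgents,     // 2. model UI: placement and parameter overrides (Agent Instances)
  RunSimulation,   // 3. agent UI: control agents while the simulation runs
  PlaybackSession, // 4. agent UI: review the recorded session
}
```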

benloh commented 4 years ago

In GitLab by @jdanish on Aug 13, 2020, 11:47

Trying to make sure I get all of this. Hopefully, my wireframe also helps clarify.

In my proposal, any agent can be put in active or annotation mode, allowing the model interface to also include annotations. I think this would most likely only happen in cases where the model is using an annotation aggregator of some sort, but might as well be flexible. Otherwise, the first table looks right to me.

Also, yes, we have been assuming that the agent interface runs just fine on the laptop except for any input that is related to tablet gestures and the like. So you can drag things around with the mouse but can't "shake".

I am still thinking through the details, but my current thinking via the wireframe is that the modeling interface has two tabs (plus home to handle logistics and model picking):

  1. Build: This is where the agents get added, programmed, etc. For convenience, they can also be added to the stage / modeling window here.
  2. Model: This is where the programmed agents get used. This has 3 possible states really:
    1. Default: Where you can arrange what is in the model, set starting modes, etc. Tracked objects and annotations are visible at this time, just no other code is running.
    2. Record: The model runs, code is activated. Some editing options are disabled.
    3. Playback: of the most recent recording. We might want to allow saving and recording for long-term use, but based on tinkering with STEP the bulk of the work we'll do is with the most recent run, so to keep things simple let's do that in the interface and likely have some less obvious button for saving / loading.

Also note that I have incorporated basic debugging into the record or playback states via the option to view the properties of one agent instance. The rest we imagine happens via widgets (add a graph or a number display, etc) or visible properties.

I believe those cover the 4 stages you mentioned.

So I think we are on the same page except to say that I imagine that the "mode" of an agent is a bit more fluid and is part of just "figuring things out." Also, mostly handled socially. So I might use the model interface to say that an active agent and an annotation agent are available to the iPads. Then any given kid can move between them in the Model / Default (edit?) mode, and then "use" them during the record mode.

Honestly, I want to think through that list a bit more before committing fully, but I think you can see the general direction.

benloh commented 4 years ago

In GitLab by @daveseah on Aug 13, 2020, 13:28

cool! I will work from this description and your wireframes so we can more easily talk through stuff soon!

benloh commented 4 years ago

A little late to the game...some additional thoughts:

  1. Wireframe using actual model -- One of the next steps we should take is to do some wireframing with a candidate model. That'll help highlight points of interaction between all the components.

  2. Setting the Stage vs Debugging -- Where do you go to define the base model? Is that just the Run Model screen? And if you need to test your agent against other agents in order to debug, is that also done on the Run Model screen? And is that a shared space across the whole group? Or does each individual have their own debug space? I can imagine a situation where you want to spend some time staging your final model, but also need to play in a sandbox area where you can try out different arrangements. You might want to go back and forth between them.

  3. Group Agents by Control -- It might be cool to be able to drag an agent into an area to set its control. So imagine you have a tracker control area, a bot/self control area, and a user control area. Your list of agents is split between the three. If you want to change who controls the bot, just drag them from one to the other. This way at a glance you can tell who's being controlled by what. Otherwise figuring out which agent is being controlled by what is a tedious one-by-one inspection process. Running in annotation vs active mode can be handled the same way (a rough sketch of this grouping follows below).
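
A rough data-structure sketch of the grouping idea in item 3, with hypothetical names: each agent class lives in exactly one control bucket, and dragging it to another area is just a reassignment.

```ts
type ControlSource = 'tracker' | 'bot' | 'user';

class ControlBoard {
  private buckets: Record<ControlSource, Set<string>> = {
    tracker: new Set<string>(),
    bot: new Set<string>(),
    user: new Set<string>(),
  };

  // Place (or move) an agent class under a control source; an agent class
  // can only ever sit in one bucket, which is what makes the panel readable.
  assign(agentClass: string, control: ControlSource): void {
    for (const bucket of Object.values(this.buckets)) bucket.delete(agentClass);
    this.buckets[control].add(agentClass);
  }

  // What the panel renders for one area: at a glance, who is controlled by what.
  list(control: ControlSource): string[] {
    return [...this.buckets[control]];
  }
}

// Usage: "bug" starts under bot control, then gets dragged to user control.
const board = new ControlBoard();
board.assign('fish', 'tracker');
board.assign('bug', 'bot');
board.assign('bug', 'user');
console.log(board.list('bot'));  // []
console.log(board.list('user')); // ['bug']
```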

I realize some of this might be out of left field and may not match your use model, but here's some sketching I did just to play with ideas.

GEM-STEP-4

benloh commented 4 years ago

In GitLab by @jdanish on Aug 18, 2020, 08:02

General comments:

Thanks for moving this forward! First, a quick comment on Sri's awesome first pass: I imagined that each agent would have a "mode" both on the modeling interface and in each agent interface (iPad). I think your mockup treats it as one of each, maybe?

Ben, I am not 100% sure what you mean by testing an agent against another. I had sort of imagined that you made your agent and then just started to run things. If you only want one fish and one algae, then you only activate those by a mix of setting their mode and only having that many people control them. So you might tell the kids to only have 1 student walk into the space, or only one uses the iPad interface to help debug.

I like the inspector idea, though I am unsure how the inspector in this sample is tied to a specific agent? That's why I went with the assumption that you either debug via visible properties (the speech bubble I assume), some kind of print statement (maybe a log widget or a different temporary speech bubble), or you can pick one agent at a time per device. So I might display the fish, but each iPad can have a different fish.

benloh commented 4 years ago

In GitLab by @jdanish on Aug 18, 2020, 08:03

Some thoughts on workflow:

I like the visual design of the left panel, but I fear it might be too limited and not match the imagined workflow. So, here is a first stab.

  1. Agents are added to the model. Let's say fish, algae, and bugs.
  2. The class for each of those shows up in the panel (in my design).
  3. The teacher who is running the modeling interface selects a control
  4. The iPads won't initially see anything except the empty background. (Side note - I think I am leaning towards renaming the iPad interface as the collaboration interface instead of agent since it is misleading). The default mode for each on the modeling interface is "active" and the default control is "script." Which means nothing will happen if you run it.
  5. The teacher changes the mode for fish to "tracking." Now, if anyone walks into the space, they become a fish. (Note, I haven't yet figured out how we handle 2 types of fish the way we did with hot and cold wands, but we can come back to that). Maybe we have a way an agent can change what it is by passing over a different agent. So you might have a tracking dot, and then a "make hot wand" and "make cold wand" agent that converts you...
  6. Now, the iPads see the fish moving around. They can't really do much more than watch, though. Maybe if they tap one, they see its properties locally.
  7. If the teacher wanted to edit the code, they'd need to switch to "edit" tab, which would temporarily shut down tracking.
  8. The teacher now identifies that the bugs should be controlled by kids with iPads and sets their control mode to "collaborator". Their "mode" is now blanked out since they are controlled on iPads, not here. Again, it might be nice to allow both, but maybe that's not simple?
  9. Each student sees a bug on their iPad in active mode. If they click on it and then into their simulation window they can move a bug around and everyone sees it.
  10. The teacher now adds a "sticker" agent to the system and sets it to iPad control.
  11. Now the students with the iPad see both a bug and a sticker and can choose one to be active on their iPad.
  12. Sri chooses the Sticker. Once it is selected, Sri sees that they can change the mode and switch it to annotation. They also see a number of properties. One is color and has a drop-down and is set to red. They click on the window and see a red sticker over the pond. The projected modeling interface does not.
  13. Sri now clicks on the color changer and makes it blue. Sri then clicks the pond again and there is now a blue sticker as well as the red one.
  14. The teacher asks if anyone is seeing something interesting. Sri raises their hand, so the teacher checks the "Sri" line under annotations and can now see a red sticker where Ben's fish had eaten all the food. And a blue sticker where it had gone later.

I know I am missing some corner cases, but off the top of my head that is some of the basics to prompt questions and get the ball rolling.

Perhaps the annotation and active agents have slightly different control panels / debug panels (I imagine the same thing can serve both purposes). In active mode, the assumption is you only have one agent, so displaying its properties and being able to set them is great.

When you create the fish, for example, you might indicate that color is user-controllable, but energy is not. So the property inspector will show both on the iPad, with energy only going up when the script reacts to a fish being over algae, but the color being something they can just switch anytime.
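
That suggests something like per-property control flags. A minimal sketch, with assumed names, where "color" is marked user-controllable and "energy" is script-only, and the inspector only exposes the flagged properties for editing:

```ts
interface PropertyDef {
  name: string;
  value: number | string;
  userControllable: boolean; // shown as editable in the agent/iPad inspector
}

// Defining the fish: color is user-controllable, energy is only set by the script.
const fishTemplate: PropertyDef[] = [
  { name: 'color', value: 'red', userControllable: true }, // kids can switch this any time
  { name: 'energy', value: 0, userControllable: false },   // only rises when the fish is over algae
];

// The inspector shows every property, but only offers editing for the flagged ones.
function editableProps(props: PropertyDef[]): PropertyDef[] {
  return props.filter(p => p.userControllable);
}

console.log(editableProps(fishTemplate).map(p => p.name)); // ['color']
```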

Maybe the sticker solution is that rather than it be the sticker agent it is actually a sticker pad agent with a color property and each time you click it, it creates a sticker agent in the window.
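
A quick sketch of that sticker-pad variant (names are hypothetical): the controlled agent is a pad with a color property, and each tap on the window spawns a separate sticker agent rather than moving the pad itself.

```ts
interface Sticker {
  owner: string;
  color: string;
  x: number;
  y: number;
}

class StickerPad {
  private placed: Sticker[] = [];

  constructor(public owner: string, public color: string) {}

  // Called when the student taps the simulation window at (x, y):
  // creates a new sticker agent instead of repositioning the pad.
  tap(x: number, y: number): Sticker {
    const sticker = { owner: this.owner, color: this.color, x, y };
    this.placed.push(sticker);
    return sticker;
  }

  stickers(): Sticker[] {
    return [...this.placed];
  }
}

// Usage: Sri drops a red sticker, switches the pad to blue, drops another.
const pad = new StickerPad('sri', 'red');
pad.tap(120, 80);
pad.color = 'blue';
pad.tap(300, 45);
console.log(pad.stickers().map(s => s.color)); // ['red', 'blue']
```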

benloh commented 4 years ago

This is very helpful. I started to draft something like this here to get the conversation started and begin to formalize things and pull out key elements of the interaction: https://docs.google.com/document/d/12hwORxuB17iJm4P72NtA95WsZHZUkXx5Rs-M1dHHlYQ/edit?usp=sharing

You should feel free to edit and comment. It obviously doesn't take into account the workflow you just posted.

Re "testing an agent against another" I was thinking that if you're working on your own, it would be helpful to be able to test your agent against a single agent by yourself, rather than including the whole group or necessarily taking over the whole group's computer.

But this raises a question about how the devices are handed out.

In your writeup it sounds like there really is only one laptop per group? And the teacher controls that laptop? So the kids are not coding agents on their own? This is more of a group activity?

Re inspectors, it's usually necessary to be able to see the values of two agents while debugging to see how they're interacting with each other. So the rough idea there was the thought bubble could be attached/shown on some agents, and perhaps two big inspector panels could be opened on two select agents. Perhaps using a colored outline, or even a rubber band.

Dropping the Whimsical link above for reference.

benloh commented 4 years ago

In GitLab by @jdanish on Aug 18, 2020, 08:40

Cool.

I think we get the best of both worlds if we assume something akin to my panel for controlling mode but some easy way of creating property panels (inspectors) for agents as needed. And a way to hide the panel when you don’t need to change things so that you have more space.

I think we are assuming one (or more) laptops per group. Initially we assume the teacher will be running it, but long-term we want groups of kids to be able to run a model and edit it themselves.

We mostly want to avoid “working alone” but it would be annoying to require two devices to be able to do anything productive. But if you can drag agents around and let them interact as you do, I guess that lets you test things just fine, and then a second device or window lets you set up a true collaboration with the ability to control more than one.

Have you seen this doc? This is where I started doing what I think you are doing. Maybe we should merge and go from there? I’m happy to start with yours to keep things clean, but let me know.

https://docs.google.com/document/d/1mS7jd6fc3DbMiLMyzdyNx5mZMRuyT_goIQQ5x9FjSt0/edit

Let me know what you think / what’s best in terms of next-steps.

Joshua

benloh commented 4 years ago

Ah! I don't believe I've seen this, or at least not this recent iteration.

I'm not sure what the best next step would be. I think we definitely want to tease out the key design aspects, in terms of interactions, principles, and specific tool ideas. I get the impression there are many ideas in there. Do they conflict with each other at all? I also get the impression that you guys have some kind of shared understanding of how the tool works that may or may not be reflected in the doc?

Maybe a good next step would be to use your gdoc as the fodder for writing a new one that is more sequentially and conceptually based (a la my basic structure)? With perhaps examples of interactions, or goals, or models taken from your draft?

We'd probably want to keep it relatively high level. If there are details of a particular feature that need to be worked out, we might break it out into a separate doc.

I think it would be rather difficult to design from your draft in its current state because we'd have to look in multiple places to get any kind of perspective on a particular feature.

Is that something you can do?

benloh commented 4 years ago

In GitLab by @jdanish on Aug 18, 2020, 09:52

I can do that. I am not 100% sure at a glance that I know what you expect for each type of information / section. Would a brief chat make sense? Or I can take a stab tonight and go from there.

Joshua