benloh opened 4 years ago
I think I get the gist of what you're looking for. One concern I have is making sure it's easy for the kids to grok and manage all the different states/modes/actions their iPad might be used for. E.g., when it's time for everyone to add stars, how do you get kids to do it? Do they all individually have to go get a star agent and set their iPad to annotation mode? Is this a 5-second operation, or does it take 5 minutes to get all the kids connected and using the right tool? This has always been an issue with the system, and I don't think we've ever really handled it all that well. But in general I like the integration of the tools so that we're not building separate tools and modes for every type of annotation.
I'm wanting to call them "ghost" agents to convey both their invisibility and their inability to modify the world.
If some agents are ghost agents that can only interact with other ghost agents, then maybe there's no need for a separate annotation vs active mode. If you're using a ghost agent, you're effectively always in annotation mode. If you're using a non-ghost agent, you're in active mode. In general, ghost agents can read parameters from the real/active world, so counters and such can be made, but they can't affect the real world agents.
You can then show/hide ghost agents on the main display, and do so on a student-by-student or whole-group/class basis.
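To make the "ghost" rule concrete, here is a minimal TypeScript sketch of the behavior described above; the names (`Agent`, `isGhost`, `canInteract`) are hypothetical placeholders, not existing project code.

```typescript
// Hypothetical sketch of the "ghost agent" rule described above.
// Ghost agents can READ properties of any agent (e.g. for counters),
// but interactions (anything that could modify another agent) are only
// allowed ghost<->ghost or active<->active.

interface Agent {
  id: string;
  isGhost: boolean;                       // true = annotation/"ghost" agent
  props: Record<string, number | string>;
}

// Reading is always allowed: a ghost counter can read active-world values.
function readProp(target: Agent, key: string): number | string | undefined {
  return target.props[key];
}

// Interaction is only legal when both agents live in the same "space".
function canInteract(a: Agent, b: Agent): boolean {
  return a.isGhost === b.isGhost;
}

// Example: a ghost star can read a fish's energy but cannot bump it.
const fish: Agent = { id: 'fish1', isGhost: false, props: { energy: 10 } };
const star: Agent = { id: 'star1', isGhost: true, props: {} };
console.log(readProp(fish, 'energy')); // 10 -> ghosts can observe
console.log(canInteract(star, fish));  // false -> ghosts cannot affect the model
```

Under a rule like this, a separate annotation-vs-active toggle would be redundant: the ghost flag alone determines which world the agent can touch.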
In GitLab by @daveseah on Aug 13, 2020, 07:59
Thanks Joshua! Here's my tech breakdown of what I think is being suggested. Will follow up afterwards.
distilled from Joshua's original email
This affects the functionality of the annotation system. Instead of having a separate annotation tool on the iPad with various controls, make it instead into two modes: "Active Mode" for interacting with model agents, "Annotation Mode" for non-model annotation (display information) agents.
Two roles: laptop and ipad:
Instead of multimodal tabs, student selects either "ACTIVE" or "ANNOTATION" mode.
Furthermore:
A few key goals of this idea were:
In GitLab by @jdanish on Aug 13, 2020, 08:50
Thanks Sri! Some clarifications:
One, we are thinking of two interfaces and two modes. We also imagine that the modes determine a sort of namespace / interaction space. Details below:
Interfaces:
Modes:
Visibility:
Thus each kid might see only the active agents and their annotations locally.
In GitLab by @jdanish on Aug 13, 2020, 08:55
Sorry, re-reading your comment I think this was in fact clear to you. But I'll leave it in case it helps, and will answer the other questions once my meeting is done.
In GitLab by @jdanish on Aug 13, 2020, 09:58
QUESTION: do iPads also have the ability to turn on/off annotation layers from other students?
ANSWER: The current thinking was no. They just see what the modeling environment makes available to keep things simple. We might want to revisit so I wouldn't do anything that prohibits this, but for now we are assuming that the iPad displays whatever is on the central modeling interface + their local work.
QUESTION: is annotation mode limited to SINGLE AGENT?
ANSWER: No. In fact we imagine we will require multiple agents to do some of the more interesting things such as having an annotation marker and an annotation marker counter.
QUESTION: modeling as a verb: doing things, seeing reactions, being embodied in it?
modeling as a noun: a diagram you look at passively, not something you reason/act with?
ANSWER: I think this is right. We also want to move away from the idea that you basically make a simulation and then submit it, and make it more about using the stuff we have made to ask and answer questions and then continue modeling. So the answer to "what affects the number of fish that can survive in this pond?" is not "look, we coded a thing that shows it" but rather: OK, if we all spread out, how does that change things? What if we change this code? OK, now let's clump together, but mark each time we eat so we can see if there is a pattern that emerges. Etc.
QUESTION: aside: drawing lines in model mode?
ANSWER: I don't see a reason not to, if we can make it work. Original Logo programming had a lot of draw-a-line-and-check-the-color-under-the-agent kinds of things.
QUESTION: aside: we will have to prioritize the list of "high powered" agents because each of them is potentially difficult to develop, but I think the list can wait until we have initial feedback on an interface.
ANSWER: Agreed. And the hope is that if we do this right we can develop many of the high powered agents using the scripting environment and patience :)
In GitLab by @daveseah on Aug 13, 2020, 10:04
Thanks for checking! Let me map this out just to be sure, because this is a three-dimensional array of possibilities and I am terrible at managing more than two dimensions in my head at once:
ui+mode | laptop | ipad |
---|---|---|
model UI + active mode | YES | N/A |
model UI + annotation mode | N/A | N/A |
agent UI + active mode | YES | YES |
agent UI + annotation mode | yes? | YES |
modeling UI : Create and edit the model.
Select annotations to display.
agent UI : Control one agent at a time.
- - - - - - - - - - - - - - - - - - - - - - - - - - - -
active mode : Only in agent UI. Affects the model.
annotation mode : Only in agent UI. Doesn't affect the model.
Overlays a single annotation agent on the model.
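Purely as a bookkeeping aid, the table and definitions above could be captured in a small TypeScript sketch (placeholder names, not a proposed API):

```typescript
// Hypothetical encoding of the table above: which device can run which
// UI + mode combination. Values mirror the YES / N/A / yes? cells.
type UI = 'model' | 'agent';
type Mode = 'active' | 'annotation';
type Device = 'laptop' | 'ipad';
type Support = 'yes' | 'maybe' | 'n/a';

const SUPPORT: Record<UI, Record<Mode, Record<Device, Support>>> = {
  model: {
    active:     { laptop: 'yes',   ipad: 'n/a' },
    annotation: { laptop: 'n/a',   ipad: 'n/a' },
  },
  agent: {
    active:     { laptop: 'yes',   ipad: 'yes' },
    annotation: { laptop: 'maybe', ipad: 'yes' },
  },
};

console.log(SUPPORT.agent.annotation.laptop); // 'maybe' -> the open "yes?" cell
```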
Looking at the table I made, I'm realizing that my "model UI" does not take the modeling activity stages into account:
Implication: there are system states that correspond to:
For the iPads, I imagine that there is an implicit "agent definition" and "agent placement", but the available features depend on which stage of modeling activities 1-4 is active.
Would that imply that for iPads, both active and annotation modes exist for the model UI?
In GitLab by @jdanish on Aug 13, 2020, 11:47
Trying to make sure I get all of this. Hopefully, my wireframe also helps clarify.
In my proposal, any agent can be put in active or annotation mode, allowing the model interface to also include annotations. I think this would most likely only happen in cases where the model is using an annotation aggregator of some sort, but we might as well be flexible. Otherwise, the first table looks right to me.
Also, yes, we have been assuming that the agent interface runs just fine on the laptop except for any input that is related to tablet gestures and the like. So you can drag things around with the mouse but can't "shake".
I am still thinking through the details, but my current thinking via the wireframe is that the modeling interface has two tabs (plus home to handle logistics and model picking):
Also note that I have incorporated basic debugging into the record or playback states via the option to view the properties of one agent instance. The rest we imagine happens via widgets (add a graph or a number display, etc) or visible properties.
I believe those cover the 4 stages you mentioned.
So I think we are on the same page except to say that I imagine that the "mode" of an agent is a bit more fluid and is part of just "figuring things out." Also, mostly handled socially. So I might use the model interface to say that an active agent and an annotation agent are available to the iPads. Then any given kid can move between them in the Model / Default (edit?) mode, and then "use" them during the record mode.
Honestly, I want to think through that list a bit more before committing fully, but I think you can see the general direction.
In GitLab by @daveseah on Aug 13, 2020, 13:28
Cool! I will work from this description and your wireframes so we can more easily talk through stuff soon!
A little late to the game...some additional thoughts:
Wireframe using actual model -- One of the next steps we should take is to do some wireframing with a candidate model. That'll help highlight points of interaction between all the components.
Setting the Stage vs Debugging -- Where do you go to define the base model? Is that just the Run Model screen? And if you need to test your agent against other agents in order to debug, is that also done on the Run Model screen? And is that a shared space across the whole group? Or does each individual have their own debug space? I can imagine a situation where you want to spend some time staging your final model, but also need to play in a sandbox area where you can try out different arrangements. You might want to go back and forth between them.
Group Agents by Control -- It might be cool to be able to drag an agent into an area to set its control. So imagine you have a tracker control area, a bot/self control area, and a user control area. Your list of agents is split between the three. If you want to change who controls the bot, just drag them from one to the other. This way, at a glance, you can tell who's being controlled by what. Otherwise, figuring out which agent is being controlled by what is a tedious one-by-one inspection process. Running in annotation vs active mode can be handled the same way.
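A hedged sketch of the "group agents by control" idea, with hypothetical names: assigning control is just moving an agent id between three sets, so a glance at the sets tells you who is controlled by what.

```typescript
// Hypothetical sketch of "group agents by control": dragging an agent
// into an area is just reassigning it to a different control group.
type Control = 'tracker' | 'bot' | 'user';

const controlGroups: Record<Control, Set<string>> = {
  tracker: new Set(['fish1', 'fish2']),
  bot: new Set(['algae1']),
  user: new Set(),
};

// Dragging agent `id` into the `to` area removes it from its old group.
function assignControl(id: string, to: Control): void {
  (Object.keys(controlGroups) as Control[]).forEach(group =>
    controlGroups[group].delete(id)
  );
  controlGroups[to].add(id);
}

assignControl('fish2', 'user');       // hand fish2 to a student
console.log([...controlGroups.user]); // ['fish2']
```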
I realize some of this might be out of left field and may not match your use model, but here's some sketching I did just to play with ideas.
In GitLab by @jdanish on Aug 18, 2020, 08:02
General comments:
Thanks for moving this forward! First, a quick comment on Sri's awesome first pass: I imagined that each agent would have a "mode" both on the modeling interface and in each agent interface (iPad). I think your mockup treats it as one of each, maybe?
Ben, I am not 100% sure what you mean by testing an agent against another. I had sort of imagined that you made your agent and then just started to run things. If you only want one fish and one algae, then you only activate those by a mix of setting their mode and only having that many people control them. So you might tell the kids to only have 1 student walk into the space, or only one uses the iPad interface to help debug.
I like the inspector idea, though I am unsure how the inspector in this sample is tied to a specific agent. That's why I went with the assumption that you either debug via visible properties (the speech bubble, I assume), via some kind of print statement (maybe a log widget or a different temporary speech bubble), or you can pick one agent at a time per device. So I might display the fish, but each iPad can have a different fish.
In GitLab by @jdanish on Aug 18, 2020, 08:03
Some thoughts on workflow:
I like the visual design of the left panel, but I fear it might be too limited and not match the imagined workflow. So, here is a first stab.
I know I am missing some corner cases, but off the top of my head that is some of the basics to prompt questions and get the ball rolling.
Perhaps the annotation and active agents have slightly different control panels / debug panels (I imagine the same thing can serve both purposes). In active mode, the assumption is you only have one agent, so displaying its properties and being able to set them is great.
When you create the fish, for example, you might indicate that color is user-controllable, but energy is not. So the property inspector will show both on the iPad, with energy only going up when the script reacts to a fish being over algae, but the color being something they can just switch anytime.
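Here is one hypothetical way that split could look in data; the `userControllable` flag and the property names are assumptions for illustration only.

```typescript
// Hypothetical property descriptors for the fish example: color is
// user-settable from the iPad inspector, energy is script-only.
interface PropDef {
  value: number | string;
  userControllable: boolean; // shown as editable in the agent UI inspector?
}

const fishProps: Record<string, PropDef> = {
  color: { value: 'blue', userControllable: true },
  energy: { value: 0, userControllable: false },
};

// The inspector shows every property but only lets users edit flagged ones.
function trySetFromInspector(
  props: Record<string, PropDef>,
  key: string,
  value: number | string
): boolean {
  const def = props[key];
  if (!def || !def.userControllable) return false; // script-owned, reject
  def.value = value;
  return true;
}

console.log(trySetFromInspector(fishProps, 'color', 'red')); // true
console.log(trySetFromInspector(fishProps, 'energy', 99));   // false
```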
Maybe the sticker solution is that, rather than it being the sticker agent, it is actually a sticker pad agent with a color property, and each time you click it, it creates a sticker agent in the window.
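A rough sketch of that sticker-pad idea (all names hypothetical): the pad is the controlled agent, and each tap spawns an independent sticker agent that inherits the pad's current color.

```typescript
// Hypothetical sketch: a "sticker pad" annotation agent that, on each
// click/tap, creates a new sticker agent at the tap location.
interface Sticker { x: number; y: number; color: string; }

class StickerPad {
  color = 'yellow';                  // user-controllable property
  readonly stickers: Sticker[] = []; // stickers live in the annotation layer

  onTap(x: number, y: number): Sticker {
    const sticker = { x, y, color: this.color };
    this.stickers.push(sticker);
    return sticker;
  }
}

const pad = new StickerPad();
pad.onTap(10, 20);
pad.color = 'red';
pad.onTap(30, 40);
console.log(pad.stickers.length); // 2 stickers, one yellow and one red
```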
This is very helpful. I started to draft something like this here to get the conversation started and begin to formalize things and pull out key elements of the interaction: https://docs.google.com/document/d/12hwORxuB17iJm4P72NtA95WsZHZUkXx5Rs-M1dHHlYQ/edit?usp=sharing
You should feel free to edit and comment. It obviously doesn't take into account the workflow you just posted.
Re "testing an agent against another" I was thinking that if you're working on your own, it would be helpful to be able to test your agent against a single agent by yourself, rather than including the whole group or necessarily taking over the whole group's computer.
But this raises a question about how the devices are handed out.
In your writeup it sounds like there really is only one laptop per group? And the teacher controls that laptop? So the kids are not coding agents on their own? This is more of a group activity?
Re inspectors, it's usually necessary to be able to see the values of two agents while debugging to see how they're interacting with each other. So the rough idea there was that the thought bubble could be attached/shown on some agents, and perhaps two big inspector panels could be opened on two selected agents. Perhaps using a colored outline, or even a rubber band.
Dropping the Whimsical link above for reference.
In GitLab by @jdanish on Aug 18, 2020, 08:40
Cool.
I think we get the best of both worlds if we assume something akin to my panel for controlling mode but some easy way of creating property panels (inspectors) for agents as needed. And a way to hide the panel when you don’t need to change things so that you have more space.
I think we are assuming one (or more) laptops per group. Initially we assume the teacher will be running it, but long-term we want groups of kids to be able to run a model and edit it themselves.
We mostly want to avoid “working alone”, but it would be annoying to require two devices to be able to do anything productive. But if you can drag agents around and let them interact as you do, I guess that lets you test things just fine, and then a second device or window lets you set up a true collaboration with the ability to control more than one.
Have you seen this doc? This is where I started doing what I think you are doing. Maybe we should merge and go from there? I’m happy to start with yours to keep things clean, but let me know.
https://docs.google.com/document/d/1mS7jd6fc3DbMiLMyzdyNx5mZMRuyT_goIQQ5x9FjSt0/edit
Let me know what you think / what’s best in terms of next-steps.
Joshua
Ah! I don't believe I've seen this, or at least not this recent iteration.
I'm not sure what the best next step would be. I think we definitely want to tease out the key design aspects in terms of interactions, principles, and specific tool ideas. I get the impression there are many ideas in there. Do they conflict with each other at all? I also get the impression that you guys have some kind of shared understanding of how the tool works that may or may not be reflected in the doc.
Maybe a good next step would be to use your gdoc as fodder for writing a new one that is more sequentially and conceptually organized (a la my basic structure)? With perhaps examples of interactions, goals, or models taken from your draft?
We'd probably want to keep it relatively high level. If there are details of a particular feature that need to be worked out, we might break them out into a separate doc.
I think it would be rather difficult to design from your draft in its current state because we'd have to look in multiple places to get any kind of perspective on a particular feature.
Is that something you can do?
In GitLab by @jdanish on Aug 18, 2020, 09:52
I can do that. I am not 100% sure at a glance that I know what you expect for each type of information / section. Would a brief chat make sense? Or I can take a stab tonight and go from there.
Joshua
Joshua writes:
We have been thinking about what we’d need / want for the “annotation” layer / functionality. And we decided that rather than have a fully separate “tool” as we did in the prior iPad app for STEP, we’d like to instead have a “mode” or “role” that applies to the iPad app. The basic idea is as follows, with some added terminology that we have developed (feel free to correct it if you have existing terms):
We assume there are two basic interfaces: 1) the modeling interface, where you set up the model, edit the code, and project, running on a laptop, and 2) the agent interface, previously known as “FakeTrack”, which currently will run on an iPad and let a student (who has logged in via a token) control one or more agents within the model that is running in the modeling interface. We had previously assumed you might somehow switch tabs to have access to the buttons or paint, or whatever, but that seemed awkward.
The new idea is that the user can instead make a choice within the agent interface to either be in “active” mode or “annotation” mode. If they are in active mode, they are controlling an agent that appears within the shared model that is projected in the modeling interface. If, instead, they are in “annotation” mode, they are controlling an agent that does not impact the model by default. Rather, they see it running locally on top of the shared model.
The cool thing is that the person controlling the modeling interface will ALSO have a list of layers that reflect students who have logged in and entered “annotation mode.” If that person selects one, it becomes visible.
So, I could be controlling a simple star agent in annotation mode. Literally just a sticker of a star that moves around to highlight what I think is important. Only I see it. However, if you turn on my layer, the projection now shows my star to the whole class. If Noel’s layer is turned on you can now see his star and my star. Either way, they get recorded, but do not impact the “active” agents in the system, and these layers can be turned off and on later.
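As a purely illustrative sketch of the layer bookkeeping described above (hypothetical names, not an actual implementation): the modeling interface keeps one annotation layer per logged-in student, everything is recorded, and the projection draws only the layers currently toggled on.

```typescript
// Hypothetical sketch of per-student annotation layers. Everything is
// always recorded; visibility only controls what the projection shows.
interface AnnotationLayer {
  student: string;
  visible: boolean;
  agents: string[]; // ids of annotation agents owned by this student
}

const layers = new Map<string, AnnotationLayer>();

function register(student: string): void {
  if (!layers.has(student)) {
    layers.set(student, { student, visible: false, agents: [] });
  }
}

function toggleLayer(student: string, visible: boolean): void {
  const layer = layers.get(student);
  if (layer) layer.visible = visible;
}

// What the shared projection should currently draw on top of the model.
function visibleAnnotationAgents(): string[] {
  return [...layers.values()]
    .filter(l => l.visible)
    .flatMap(l => l.agents);
}

register('joshua');
register('noel');
layers.get('joshua')!.agents.push('star1');
toggleLayer('joshua', true);
console.log(visibleAnnotationAgents()); // ['star1']
```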
The thinking is that if we make the agents robust enough, then we no longer need a full annotation tool / tab, but just clever use of pre-made agents in annotation mode. For example, if we assume a new primitive function is “draw on background”, then you can use an agent consisting of a pencil image that uses the paint functionality to annotate. And then huzzah! If you want to draw lines as part of your model, do it in active mode. If you want it to just be a “layer” that turns on and off, do it in annotation mode.
In a sense, we are also thinking of this as two distinct namespaces, so that the annotations display over the model, but do not interact with it and vice versa. Also, annotation agents only interact with other visible annotation agents. So, for example, let’s say that we wanted to have kids annotate by dropping stickers where they think something is hot. We might also create an agent that counts those stickers and put it in annotation mode. If you make it visible, that counter displays. If you make my layer visible, the counter will count my stickers. Make Noel’s layer visible, and now we see the combined total of the stickers. Turn mine off again and you just see Noel’s, etc. Either way, this is the same basic counting functionality that we imagined using for basic agents and widgets, so I am hoping that it means we have already made it. And either way we need to solve some technical / UI issues about making sure that you can set properties of the agent that you are controlling, whether on the modeling interface or the agent interface. But now that work helps us in both modes and not just in active mode.
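And a minimal sketch of that counter behavior, building on the layer idea above (again, hypothetical names): the counter is itself an annotation agent and only counts stickers on layers that are currently visible.

```typescript
// Hypothetical sketch: an annotation-mode counter that counts only the
// stickers on currently visible annotation layers.
interface StickerAgent { owner: string; kind: 'sticker'; }

const visibleLayers = new Set<string>(); // students whose layers are toggled on
const annotationAgents: StickerAgent[] = [
  { owner: 'joshua', kind: 'sticker' },
  { owner: 'joshua', kind: 'sticker' },
  { owner: 'noel', kind: 'sticker' },
];

function countVisibleStickers(): number {
  return annotationAgents.filter(
    a => a.kind === 'sticker' && visibleLayers.has(a.owner)
  ).length;
}

visibleLayers.add('joshua');
console.log(countVisibleStickers()); // 2 -> only Joshua's layer is on
visibleLayers.add('noel');
console.log(countVisibleStickers()); // 3 -> combined total
visibleLayers.delete('joshua');
console.log(countVisibleStickers()); // 1 -> just Noel's
```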
A few key goals of this idea are:
1) Simplify the interface - instead of making lots of custom tools, you just have to worry about one set (agents) and a mode. If the agent interface is robust enough it will just work across modes.
2) Assume that in most cases, we can build the annotation agents we want and share them via the shared library. This lets you focus on making high powered widgets and primitives, and lets us (and future teachers) easily share agents via the library / import / export function. So we might make a "simple" pen agent, and a "complex" pen agent where you can set the color and shape, and then selectively put those into models we are working on depending on what we want the kids to be able to do. Either way, you don't need to worry about those details so long as we have a nice setup for adding properties to agents and making them available via the library, modeling interface, and agent interface.
3) Develop a pattern / culture of thinking of modeling as a verb, and not as a noun (model). That is, we want kids to be thinking constantly about showing information and having it relate to other ideas they have incorporated in their model along with their own embodiment, and then modify both. We think that by treating this as "agents all the way down" it increases the chance of them getting into that mindset and not thinking of it as a fully different set of tools. At least, that's our hope.
4) If we need something really complicated, we can still ask you to build it at some point as a custom widget similar to the graphs. But then it'll already be set to
Joshua's Whimsical Design