benloh opened 3 years ago
@jdanish A few questions and assumptions about the use model for a map editor. There are two competing approaches:
The first approach: one group might be running the sim, updating scripts on the main projection screen with Mission Control, while a second group actively edits the map (e.g. placing instances for the sim's initial state) on a second laptop. This suggests that the Map Editor might run a separate simulation from Mission Control.
What happens when scripts are updated and submitted by another device? Map Editor should probably ignore these updates (otherwise agents would get re-inited to their starting positions in the middle of editing), and only update when/as the map editors make changes to the map?
What happens when the Map Editor updates the position and number of instances on the Map? Mission Control should ignore these updates until the next time a user submits a script?
This approach offers more flexibility across the classroom, but is more complex.
An alternative approach would be to have only a single simulation running for the whole group on Mission Control. In that case, when a Map Editor is opened, Mission Control cedes control of the simulation to the Map Editor. Map Editor could also be implemented as a subpanel or mode of Mission Control, so there is only ever one instance.
This is slightly simpler, but it restricts map editing to one device and potentially clutters up Mission Control. You also wouldn't be able to do something like watch a simulation run while you edit the starting map.
I'm sure there are more issues, but it'd be helpful if you had any initial thoughts on how you expect students to edit maps and run simulations.
In GitLab by @jdanish on Feb 23, 2021, 06:31
Thanks Ben! I will raise this in our 11am ET meeting with VU and then get you some notes shortly after.
Joshua
Joshua responds (See email "Some replies / decisions" dated 2/23/2021):
We talked over what I understood to be the outstanding queries and added some notes that I am sharing below. Please let us know if anything is missing, off, or problematic.
1: We are in agreement that more script functionality will be better than rushing to a UI for kids, so let’s focus on making the scripting more robust after the tracking systems are integrated
2: Once the tracking system is done, moving towards the map editor makes sense as well (we’ll let you pick the order).
3: Here is our sense of the thinking around the map editor:
At any given time, a “model” (file in GEM-STEP) will have only 1 map, and we assume that folks will either be running it or editing it together. The map is part of the model in our mind. So, if we need two maps, one that Noel’s group is using and one that Corey’s group is using, we’d make 2 copies of the model: Noel’s Model and Corey’s Model, and then each group could play with their map and code to their hearts' content.
If you are running the simulation, you need to stop it to edit the map. If someone tries to edit the map from another device, they’ll be alerted. And vice-versa.
It’d be nice if two devices could edit the map / simulation at the same time ala net.create or meme, but if not that’s fine.
We like the idea of the map editing being tied to the mission control as a mode for this reason. Honestly, in my tinkering, I find it annoying to move between the script editing and mission control. This is especially true because I like being able to move things around in mission control, and can’t do that in script editing. I assume the reason you don’t have the script editor as a panel in mission control is because too many people might edit at once?
If you save a script, it should update the agents next time the simulation is reset. We might want an alert that the simulation is running so changes won’t be seen until it is re-started, but otherwise that’s fine.
I assume that if two people are editing the same agent, the second will get a warning / not be able to edit it so that they are not clobbering each other?
4: As we thought about the editor, we realize a few things we wanted to note.
We presume that this is where you’d also note whether someone moving into the space becomes a specific kind of agent?
We assume it’d be easy to click a button (or drag) and add a single instance of a specific agent type (fish or algae) and then move it around.
It’d be nice to be able to add more than 1 at a time.
We assume there will be a “birth” event where we can do things like set a random location or random energy?
a. It’d be nice if this has access to variables like the name / label, and possibly the pozyx id? That way this is where we might say “if tag 1, call noel” etc. Ideally that would all be visual somewhere instead, but maybe not.
In an ideal world, we’d be able to also set “birth” conditions via something like the instance viewer? So if you want to have 3 AI fisher and one is fat and one skinny, and one in the middle, you can either
a. Write the birth code to somehow pick using a global variable?
b. Add the fish, select it, and set its starting energy / etc.
c. It’d be reasonable to assume that all birth code runs even if there are things set via interface, so we might want a way of checking if something was already set via the visual. The best practice would be to not set something in both, but we’d want a clear approach to handling overlap. That is, we might have a default energy, and not use it if an agent was also set to have energy via the editor.
5: In their pseudo-coding efforts, VU realized there might be cases where there are functions that our AI agents have access to that are less obvious when using tracking. We wanted to note this, and aim to have parallel functions so that the code doesn’t need to change. The case in point was facing - we imagine that a lot of AI movement might be tied to the direction an agent is facing. Facing is harder to identify in p-track, though we believe you did a calculation in prior STEP work based on most recent movement. So it’d be great to maintain that. If “direction” isn’t on the list of script functions, we would like it added at some point :) (this turns out to be key in the moth situation)
First, one note about the system architecture: currently the simulation itself is running in the browser. MissionControl has a Panel called PanelSimulation that actually runs the simulation code, but that's merely a placeholder implementation. The overall system is architected such that the simulation can run anywhere, server-side or client-side (browser). This means you can have multiple simulations running simultaneously on multiple devices in a single network, which is quite powerful and flexible. For instance, we can potentially have each group and subgroup running / testing their own simulation, all sourced off a single server for the classroom (whether or not the server/network can handle this is something we'd still have to test). Part of what we're trying to figure out at the moment is what kind of configuration makes sense given the workflow needs of the classroom and individual groups and subgroups: how much needs to be shared, how much should be isolated.
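To make the decoupling concrete, here's a minimal sketch of the idea; the names are hypothetical, not the actual URSYS/GEM-STEP API:

```ts
// Minimal sketch (hypothetical names, not the actual URSYS/GEM-STEP API) of the
// idea that the simulation loop and its viewers are decoupled by messages, so
// the sim can run server-side or client-side without the viewers changing.
type DisplayFrame = { agents: { id: string; x: number; y: number }[] };
type FrameHandler = (frame: DisplayFrame) => void;

class SimChannel {
  private handlers: FrameHandler[] = [];
  subscribe(h: FrameHandler) { this.handlers.push(h); }          // a viewer: MissionControl, ScriptEditor, ...
  publish(frame: DisplayFrame) { this.handlers.forEach(h => h(frame)); }
}

// Wherever the sim happens to run, it only needs a channel to publish frames on.
function simStep(channel: SimChannel) {
  channel.publish({ agents: [{ id: 'fish01', x: 10, y: 20 }] });
}

const channel = new SimChannel();
channel.subscribe(frame => console.log('viewer got', frame.agents.length, 'agents'));
simStep(channel);
```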
Sri is working on the tracking, I'm working on the Map Editor and scripting updates for now, so both tracks are moving forward.
3.1. At any given time, a “model” (file in GEM-STEP) will have only 1 map, and we assume that folks will either be running it or editing it together. The map is part of the model in our mind. So, if we need two maps, one that Noel’s group is using and one that Corey’s group is using, we’d make 2 copies of the model: Noel’s Model and Corey’s Model and then each group could play with their map and code to their hearts content.
OK good. That's my assumption too: a one-to-one mapping between models and maps.
Also, just a confirmation/clarification: I'm assuming that the classroom is divided into "groups" -- a set of students who are focused on a particular model. So a classroom might have multiple groups working on different Algae models, or perhaps each group has their own domain (e.g. Algae vs Decomposition). Within each group, I assume that there might be "subgroups" -- one or two students who are focusing on a specific aspect of the model, e.g. editing a particular blueprint, defining the map, etc.
3.2. If you are running the simulation, you need to stop it to edit the map. If someone tries to edit the map from another device, they’ll be alerted. And vice-versa.
This makes sense, but see discussion on 3.6, below.
3.3. It’d be nice if two devices could edit the map / simulation at the same time ala net.create or meme, but if not that’s fine.
We'll keep this in mind as a nice-to-have, but it does complicate things. The key is figuring out at what grain size to lock out access (e.g. per instance vs whole map).
3.4. We like the idea of the map editing being tied to the mission control as a mode for this reason.
OK.
3.5. Honestly, in my tinkering, I find it annoying to move between the script editing and mission control. This is especially true because I like being able to move things around in mission control, and can’t do that in script editing. I assume the reason you don’t have the script editor as a panel in mission control is because too many people might edit at once?
Multiple issues here.
3.6 If you save a script, it should update the agents next time the simulation is reset. We might want an alert that the simulation is running so changes won’t be seen until it is re-started, but otherwise that’s fine.
I think this is a question of workflow, especially around Mission Control. The way I see it, there are three different phases of work: script editing, map editing, and model testing. Each has a slightly different workflow.
Script Editing
With script editing, the focus is distributed: multiple subgroups are working on different scripts simultaneously. Mission Control displays a shared model, but is not being actively managed. In this model, it's important that subgroups have independence. I'm assuming you might have, say, one group editing Fish and a second group editing Algae while Mission Control is projected at the front of the classroom (or is displayed on the group's shared central laptop). I would think that as the Fish group finishes an edit, they should be able to send the script to the simulation to test it immediately, even if the second group isn't finished. That way they can both edit and test simultaneously in the shared space. This is how the Script Editor and Mission Control currently work together: as soon as a particular blueprint is sent, the simulation compiles the blueprint, removes all existing instances of that blueprint, and creates new ones based on the init spec (map editor), all the while not touching instances created by the other group/blueprint.
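As a rough sketch of that behavior (illustrative names only, not the actual implementation):

```ts
// Rough sketch: submitting one blueprint re-creates only that blueprint's
// instances from the init spec, leaving other blueprints' instances untouched.
interface Instance { blueprint: string; id: string; x: number; y: number }

let instances: Instance[] = [];
const initSpecs: Record<string, { id: string; x: number; y: number }[]> = {
  Fish:  [{ id: 'fish01', x: 0, y: 0 }],
  Algae: [{ id: 'algae01', x: 50, y: 50 }],
};

function onBlueprintSubmitted(blueprintName: string) {
  // compile step elided in this sketch
  instances = instances.filter(i => i.blueprint !== blueprintName); // remove old instances of this blueprint only
  for (const spec of initSpecs[blueprintName] ?? []) {
    instances.push({ blueprint: blueprintName, ...spec });          // re-create from the init spec
  }
}
```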
Map Editing
Map Editing has a slightly different workflow than Script Editing, but it might occur concurrently with Script Editing. The assumption is that if you're editing a map, you already have some scripts defined, and the important task is that you can define and position instances and set initial property values. So if someone is editing scripts and sends an update, the Map Editor should not bother to update until the next cycle. Currently, since `init` scripts are separate from the blueprint scripts, this is fairly straightforward: a script update won't trigger init, and vice versa. Things do get messy, though: if a script is actively being edited on a different device when Map Editor needs to update, does it query the device for the current version of the script, does it keep a cached version of the most recently submitted script, or does it just load the script that was saved with the model? Most likely we'd go with the most recent submission and fall back to the saved script.
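Something like this, as a sketch (hypothetical names):

```ts
// Prefer the most recently submitted version of a blueprint's script,
// otherwise fall back to the version saved with the model.
function resolveBlueprintScript(
  lastSubmitted: Map<string, string>,   // blueprint name -> most recently submitted script text
  savedWithModel: Map<string, string>,  // blueprint name -> script text saved in the model file
  blueprint: string
): string | undefined {
  return lastSubmitted.get(blueprint) ?? savedWithModel.get(blueprint);
}
```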
Model Testing
In Model Testing, I would think we want Mission Control to control the starting and stopping of the simulation. We would not want ScriptEditors sending script updates to cause the simulation to recompile and re-instantiate the blueprint until Mission Control is finished with the test run. If Map Editing is handled directly on Mission Control, then we don't need to worry about map updates triggering a sim restart.
So perhaps one way to handle this is to have Mission Control default to Script Editing mode: always showing instances, auto-starting the sim whenever a script is submitted. If a user selects Map Edit mode, the sim is stopped, instances revert back to their init positions, and script updates do not trigger auto-starts. Once a user clicks "Run", Mission Control enters "Model Testing" (or perhaps "Model Running") mode, and again script updates do not trigger auto-restarts. Instead, sim playback and resets are handled via the Sim Control buttons.
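A compact way to picture it (hypothetical names, just illustrating the mode behavior described above):

```ts
// Sketch of the three Mission Control modes and how a script submission
// would be handled in each; not actual GEM-STEP code.
type McMode = 'SCRIPT_EDIT' | 'MAP_EDIT' | 'MODEL_TEST';

function handleScriptSubmission(
  mode: McMode,
  restartSim: () => void,        // recompile + re-instantiate + auto-start
  queueForNextReset: () => void  // hold the change until the user resets via Sim Controls
) {
  if (mode === 'SCRIPT_EDIT') {
    restartSim();                // default mode: a submitted script auto-restarts the sim
  } else {
    queueForNextReset();         // MAP_EDIT and MODEL_TEST: no auto-restarts
  }
}
```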
3.7 I assume that if two people are editing the same agent, the second will get a warning / not be able to edit it so that they are not clobbering each other?
My intended design was to basically do blueprint-locking on a first-come, first-served basis. If you're the first person to check out a script, you lock it. The second person to try to check out the script will be able to view it, but not make any changes. The system will say "Phil is editing Fish" or something to that effect.
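Roughly (a sketch, not the eventual implementation):

```ts
// Minimal sketch of first-come, first-served blueprint locking.
class BlueprintLocks {
  private locks = new Map<string, string>(); // blueprint name -> editor currently holding the lock

  checkout(blueprint: string, editor: string): { ok: boolean; heldBy?: string } {
    const holder = this.locks.get(blueprint);
    if (holder && holder !== editor) return { ok: false, heldBy: holder }; // view-only: "Phil is editing Fish"
    this.locks.set(blueprint, editor);
    return { ok: true };
  }

  release(blueprint: string, editor: string) {
    if (this.locks.get(blueprint) === editor) this.locks.delete(blueprint);
  }
}
```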
4.1. We presume that this is where you’d also note whether someone moving into the space becomes a specific kind of agent?
Yes, this is probably defined in the init script/instance setting UI.
4.2. We assume it’d be easy to click a button (or drag) and add a single instance of a specific agent type (fish or algae) and then move it around.
That is what I'm working on right now.
4.3. It’d be nice to be able to add more than 1 at a time.
I assume if we're talking about auto-populating, we'd also be randomly placing the instances? Do you have a sense of how many you would want to add? e.g. 20 algae at a time, 1000 poop? Somewhere in between?
4.4. We assume there will be a “birth” event where we can do things like set a random location or random energy? a. It’d be nice if this has access to variables like the name / label, and possibly the pozyx id? That way this is where we might say “if tag 1, call noel” etc. Ideally that would all be visual somewhere instead, but maybe not.
Yes, this is exactly what the current `init` script does. So you can already do this, at least via scripting. Figuring out how to support that degree of flexibility via the UI is another matter.
Mapping inputs is something we do plan on doing as well, though we still have to work out the specifics.
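For concreteness, here's a TypeScript stand-in for what an init like that does (not GEM-SCRIPT syntax; the tag-to-name table is a made-up example):

```ts
// Per-instance init: randomize starting energy and map an input tag id to a display name.
interface AgentInit { name: string; energy: number; pozyxId?: number }

const TAG_NAMES: Record<number, string> = { 1: 'Noel', 2: 'Corey' }; // hypothetical mapping

function initAgent(agent: AgentInit) {
  agent.energy = Math.floor(Math.random() * 100);          // random starting energy
  if (agent.pozyxId !== undefined && TAG_NAMES[agent.pozyxId]) {
    agent.name = TAG_NAMES[agent.pozyxId];                 // "if tag 1, call noel"
  }
}
```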
4.5. In an ideal world, we’d be able to also set “birth” conditions via something like the instance viewer? So if you want to have 3 AI fisher and one is fat and one skinny, and one in the middle, you can either a. Write the birth code to somehow pick using a global variable? b. Add the fish, select it, and set its starting energy / etc. c. It’d be reasonable to assume that all birth code runs even if there are things set via interface, so we might want a way of checking if something was already set via the visual. The best practice would be to not set something in both, but we’d want a clear approach to handling overlap. That is, we might have a default energy, and not use it if an agent was also set to have energy via the editor.
Yes! Our original UI mockup had something along these lines. We're working through this stuff now.
5: In their pseudo-coding efforts, VU realized there might be cases where there are functions that our AI agents have access to that are less obvious when using tracking. We wanted to note this, and aim to have parallel functions so that the code doesn’t need to change. The case in point was facing - we imagine that a lot of AI movement might be tied to the direction an agent is facing. Facing is harder to identify in p-track, though we believe you did a calculation in prior STEP work based on most recent movement. So it’d be great to maintain that. If “direction” isn’t on the list of script functions, we would like it added at some point :) (this turns out to be key in the moth situation)
Yeah, directionality is something we can calculate, and in fact the wander code is already using this concept to make the wandering more natural and less jittery.
It does bring up some interesting issues with regard to sprites though: e.g. the current fish sprite is a side view of a fish. Do we just blindly rotate the fish to set its direction, disregarding that it might be moving upside down? Or do we need to add more sophisticated costume controls to do stuff like dynamic flipping? This is somewhat ameliorated by using a top-down view of sprites, but that might not be a solution for something like decomposition that is dependent on a side view.
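To illustrate the two pieces (function names are illustrative, not existing GEM-STEP script functions): heading can be derived from the most recent movement, and the flip question is roughly this:

```ts
// Derive "facing" from the last movement (as in prior STEP work) and flip a
// side-view sprite so it doesn't render upside down.
function headingFromMovement(prevX: number, prevY: number, x: number, y: number): number {
  return Math.atan2(y - prevY, x - prevX); // radians; 0 = facing right (+x)
}

function sideViewTransform(heading: number): { rotation: number; flipVertical: boolean } {
  const facingLeft = Math.cos(heading) < 0;
  // mirror vertically when facing left so a right-facing side-view sprite stays upright
  return { rotation: heading, flipVertical: facingLeft };
}
```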
Can you be more specific about why directionality is important for moths?
In GitLab by @jdanish on Feb 23, 2021, 14:07
Lots to think about and respond to, thanks! Some quick replies to a few things:
It does bring up some interesting issues with regard to sprites though: e.g. the current fish sprite is a side view of a fish. Do we just blindly rotate the fish to set its direction, disregarding that it might be moving upside down? Or do we need to add more sophisticated costume controls to do stuff like dynamic flipping? This is somewhat ameliorated by using a top-down view of sprites, but that might not be a solution for something like decomposition that is dependent on a side view.
I'd imagine that with the current model you can either set the costume within script however you want, or we can settle on a default and add that to the motion or costume feature. Perhaps it is something you activate with a function call (changeCostumeOnMoveTopdown) or set as a property (bChangeCostumeOnMoveTopdown). In that case I think assuming a top-down (basic) model that just "works" most of the time seems nice. Later we might add other feature options / calls that allow other nuances but that'd give us flexibility?
Can you be more specific about why directionality is important for moths?
Right now, the idea is that there are birds (predators), controlled by kids (later AI) that are looking for moths to eat. If the moths are the same color as the tree they are on, the predators can't see them because they are camouflaged. If the moth isn't in front of a tree, or is a different color, the predator can see them. Because the kids are the predator (bird), we would need to show or hide the moth depending on whether they should be able to see it. So, in a sense, this is similar to how in STEP Bees we had flowers appear when kids were near them. However, rather than focusing on when birds are "near" (though there might be a range, or cone of visibility) we want to know if a bird is looking towards them. If a bird is looking towards them and they are not camouflaged, they'll appear on screen. If either is not true, they'll be hidden so that the birds (kids) have to search.
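In rough pseudo-code, the rule is something like the sketch below (the cone of visibility is just an assumption for illustration):

```ts
// A moth is drawn only when a bird is looking towards it AND it is not
// camouflaged against the tree it is on.
interface Pos { x: number; y: number }

function isLookingAt(bird: Pos, birdHeading: number, moth: Pos, coneDeg = 45): boolean {
  const angleToMoth = Math.atan2(moth.y - bird.y, moth.x - bird.x);
  let diff = Math.abs(angleToMoth - birdHeading);
  if (diff > Math.PI) diff = 2 * Math.PI - diff;           // wrap the angular difference
  return diff <= (coneDeg * Math.PI) / 180;
}

function mothVisible(birdLooking: boolean, mothColor: string, treeColor: string | null): boolean {
  const camouflaged = treeColor !== null && mothColor === treeColor;
  return birdLooking && !camouflaged;
}
```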
In GitLab by @jdanish on Feb 23, 2021, 14:14
I assume if we're talking about auto-populating, we'd also be randomly placing the instances? Do you have a sense of how many you would want to add? e.g. 20 algae at a time, 1000 poop? Somewhere in between?
I had been thinking closer to 20. However, as I reflect, I imagine one of two situations exists:
So, for simplicity, let's ignore this for now assuming 1 and 2 are doable?
In GitLab by @jdanish on Feb 23, 2021, 14:20
Clarifying question: will the script editor be running its own local simulation? The reason I ask is that I think in the current model it is showing the Mission Control one, but you can't interact. Hence my "annoyance". If it's just running the simulation but with the current script added in, and FakeTrack to test things, then that sounds awesome, and I see the value in keeping Mission Control separate. And I assume that when you want to test in front of the class, you show MissionControl and someone runs FakeTrack on an iPad. Or multiple.
In GitLab by @jdanish on Feb 23, 2021, 14:30
Also, just a confirmation/clarification: I'm assuming that the classroom is divided into "groups" -- a set of students who are focused on a particular model. So a classroom might have multiple groups working on different Algae models, or perhaps each group has their own domain (e.g. Algae vs Decomposition). Within each group, I assume that there might be "subgroups" -- one or two students who are focusing on a specific aspect of the model, e.g. editing a particular blueprint, defining the map, etc.
That sounds right to me.
We'll keep this in mind as a nice-to-have, but it does complicate things. The key is figuring out at what grain size to lock out access (e.g. per instance vs whole map).
Agreed. And likely "it depends" so being careful not to make it impossible later while focusing on other issues makes sense to me for now.
Being able to move things around in ScriptEditor should be possible once we have FakeTrack. At least that was my original thinking -- I always assumed that as you edit the script, you'd need to be able to manipulate an agent instance. There might be some limitations we need to work around, but that is certainly the intent.
Just to make sure we are on the same page: right now, the fish and algae do their thing via AI. Sometimes the fish are gonna die because they are too far from algae. However, I haven't had a chance to test their eating code! So the easiest thing for me to do is grab them in Mission Control and move them close to an Algae, and that works. That's awesome and helpful for testing, and I'd like to keep it independent of the other issues / changes. It'd be nice to do that either in Script Editor or Mission Control, at least for all AI-controlled agents. Then, in addition, if an agent is tracking-controlled, you could use FakeTrack to move it around, and that'd also be nice as a way to test. To avoid confusion it might make sense to keep these secret - you can force-move a non-tracked object to test how it interacts, or you can use tracking (any of them) to test what happens when something moves via FakeTrack or other tracking. Then that works across tracking systems nicely, I think.
question of workflow
I think this makes sense to me. In my mind I had originally assumed that map editing was part of model testing, as it seems silly to update the map if you aren't testing a model. However, I don't see a downside to keeping it flexible.
This part is starting to feel like we need some sort of state diagram or flow chart to make sure we know all of the use cases and save / error cases that are related to them. Is that just me? If you can "picture" it all, I am happy to wait and test later. But if you agree, I'd be happy to work on drafting the flow with you or reviewing it once you do.
top down sprites
Yes, I was assuming that was the approach we'd take, but did want to call it out since your team should be aware of that when making starter sprites. (The sprites in the system now are actually my re-creation of your team's sprites because I couldn't get a clean version otherwise).
moths
OK, so this sounds more like a new `when` condition, "is looking at" or "can see" or something to that effect? This is helpful.
auto-populating
Hitting "add" 20 times will be doable as soon as I add that.
Doing it via script is not possible yet because we don't have spawning working yet. But I imagine this might be something like an init script for the World agent.
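When spawning does exist, I imagine it looking something like this (a speculative TypeScript stand-in, not GEM-SCRIPT; the spawn callback is hypothetical):

```ts
// Speculative sketch of "spawn via the World agent's init"; spawning doesn't exist yet.
function worldInit(spawn: (blueprint: string, x: number, y: number) => void) {
  for (let i = 0; i < 20; i++) {
    // 20 algae at random positions within a +/-200 unit square
    spawn('Algae', Math.random() * 400 - 200, Math.random() * 400 - 200);
  }
}
```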
In GitLab by @jdanish on Feb 24, 2021, 09:31
Great.
In GitLab by @jdanish on Feb 24, 2021, 09:32
Sounds good. And short-term, we can test it out once we have the spawn command by having some other agent create them. So the init call on the "pond" could actually produce fish if we need.
script editor's simulation
Currently the model is to have the script editor be a viewer off of the simulation running on Mission Control. The idea is that this way you can more easily test interactions between agents.
If script editors ran their own simulations, then in order to test an agent like Fish, you'd have to figure out how to introduce algae to the sim. Our architecture can support this, but it's not the way it's built out at the moment.
So you should assume that some kind of instance control will be possible in Script Editor. But if you would like people who are editing and testing scripts to work independently on their own simulation, we should adjust our approach now.
In GitLab by @jdanish on Feb 24, 2021, 09:38
No, that works. So basically if a group has 2 sub-groups, one editing fish and one editing algae, they are both working off the currently projected sim. It's a social problem to not change the algae under the fish and vice versa without conversation, which works fine. Let's run with that since if we really need, we can always duplicate a model and have 2 groups use similar but different models. So the duplication of models and sharing of scripts remains important for this long-term flexibility (but we don't need it anytime soon).
AI vs FakeTrack control
I'm assuming that even for agents under AI control, we would have some mechanism for moving them around during testing, either via a faketrack-like interface or direct manipulation (I think we can add drag and drop to a viewer, but I'm not 100% sure). Worst case, the direct drag will be a super-user type feature only available on mission control.
state diagram
A state diagram is an excellent idea. We are certainly rapidly approaching the point where that would if nothing else facilitate conversations. I think it was hard to see the big picture before because we hadn't built out all the parts and experienced them working with each other. If you'd like to take a crack at it, that would be great. I just created a blank whimsical diagram (and invited you): https://whimsical.com/gem-step-user-flow-MMKxawUBV9q7d21UErS7US@2Ux7TurymN2VtyBwwNS5
In GitLab by @jdanish on Feb 24, 2021, 09:49
I'm assuming that even for agents under AI control, we would have some mechanism for moving them around during testing, either via a faketrack-like interface or direct manipulation (I think we can add drag and drop to a viewer, but I'm not 100% sure). Worst case, the direct drag will be a super-user type feature only available on mission control.
Cool. It works great now the way it is in mission control.
A state diagram is an excellent idea. We are certainly rapidly approaching the point where that would if nothing else facilitate conversations. I think it was hard to see the big picture before because we hadn't built out all the parts and experienced them working with each other. If you'd like to take a crack at it, that would be great. I just created a blank whimsical diagram (and invited you): https://whimsical.com/gem-step-user-flow-MMKxawUBV9q7d21UErS7US@2Ux7TurymN2VtyBwwNS5
Lol - I was hoping you'd volunteer. But sure, I can take a stab. It might be a day or 2, though.
Actually, the Map Editor was starting to move in this direction before we started this conversation (that's what triggered this thread): in the current implementation it runs its own simulation apart from Mission Control, so it can do things independently, e.g. you can manipulate agents without worrying about affecting what someone might be testing. This allows you to have multiple groups working on script editing and testing while another group works on the map without clobbering each other's testing. I have to do a little more work to make the simulations truly independent of each other (right now they send out conflicting updates to the viewers). I'm wondering if maybe I should commit this and let you play with it.
Heh. I can do it, but I'm in the guts of map editor at the moment, so it might be a while before I get to it (like maybe a week). We can punt for now...
In GitLab by @jdanish on Feb 24, 2021, 10:20
I'm certainly happy to play whenever you are ready! And if you let us know what to test we certainly will.
In GitLab by @jdanish on Feb 24, 2021, 10:20
Well, let's see who gets there first!
In GitLab by @jdanish on Feb 25, 2021, 12:18
First draft is now in Whimsical: https://whimsical.com/gem-step-user-flow-MMKxawUBV9q7d21UErS7US
Implemented with !55.
In order to test a complete simulation, we need to be able to define the initial instances, which raises several questions:
Are instances spawned by the World agent?
What is the relationship between pre-defined instances and "real-time" instances being introduced with Edit Script "Send to Server" calls?
Should the simulation "Start" button automatically populate predefined instances? Do you see predefined instances sitting statically on-screen when the model loads? How do you define predefined instances? What screen is that set up on? Mission Control? A new Map Edit screen?