Closed: rlskoeser closed this issue 1 year ago
I see from the experimental Boltzmann wealth example that `size` is indeed the correct parameter, and I have it working now; not sure why it didn't seem to work before. Can anyone advise on a recommended range of scales? It seems to be quite different from the radius scale in `mesa runserver`.
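(For illustration, a minimal sketch of an agent portrayal using the lowercase `color` and `size` keys discussed here. It assumes a Boltzmann-wealth-style agent with a `wealth` attribute; the `size` value follows Matplotlib scatter's `s` argument, as noted later in the thread.)

```python
# Minimal agent_portrayal sketch for the experimental JupyterViz.
# Assumes an agent with a `wealth` attribute (Boltzmann wealth style);
# "size" is interpreted like Matplotlib scatter's `s` argument.
def agent_portrayal(agent):
    portrayal = {"color": "tab:blue", "size": 50}
    if agent.wealth == 0:
        portrayal["color"] = "tab:red"
        portrayal["size"] = 10
    return portrayal
```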
Glad to know that the experimental frontend is being used.
I also wanted to use a Select input for a choice parameter and add an output that displays the current step. I have a fork of mesa where I've implemented these - should I open a pull request with these changes to get feedback? (I haven't written any tests or added documentation for the changes yet, but would do if these changes are welcome.)
You are welcome to do so. Note that the current user input code is not yet documented either: https://github.com/projectmesa/mesa/blob/24a1df30a12a399cc54d8ff2d578db57e0f37a4a/mesa/experimental/jupyter_viz.py#L136-L152. And they have only been tested manually.
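(For readers following along, a rough sketch of how user-adjustable parameters are declared for JupyterViz, based on the experimental Boltzmann wealth example around that commit. The dict keys ("type", "value", "min", "max", "step"), the `SliderInt` type, and the `JupyterViz` signature are assumptions drawn from that example and may differ in other versions; `MyModel` and `agent_portrayal` are placeholders.)

```python
from mesa.experimental import JupyterViz

# Sketch of declaring user inputs for JupyterViz, following the
# experimental Boltzmann wealth example. Key names and the JupyterViz
# signature are assumptions; `MyModel` and `agent_portrayal` are placeholders.
model_params = {
    "N": {
        "type": "SliderInt",
        "value": 50,
        "label": "Number of agents:",
        "min": 10,
        "max": 100,
        "step": 1,
    },
    # Fixed (non-interactive) parameters are passed as plain values.
    "width": 10,
    "height": 10,
}

page = JupyterViz(
    MyModel,
    model_params,
    measures=["Gini"],
    agent_portrayal=agent_portrayal,
)
page
```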
I'm also interested in adding a reset button similar to the play/stop buttons, but I'm not sure how to programmatically restart the model; currently the only way to reset is to change one of the input parameters.
It's easier if I do this instead. You should open an issue for this.
Another question: when running with jupyterviz, how should I access the model in order to get collected data? I figured out I could get it this way: page.args[0].model.datacollector.get_model_vars_dataframe() — but that doesn't seem ideal!
I haven't set it up so that you can get it easily from `page`, but `page` is a reactive component and is usually recreated whenever the page is updated, so it's easier to do data analysis and visualization from within the framework for now. Not sure what the ideal solution is yet.
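(One common workaround for analysis outside the viz, unrelated to `page` internals, is to run the model headless and use the standard DataCollector API. A sketch follows; `MyModel` and `n_agents` are placeholders for your own model class and constructor arguments, assuming a DataCollector is set up in `__init__`.)

```python
# Run the model headless and read collected data with the standard
# DataCollector API, instead of reaching into page.args.
# `MyModel` and `n_agents` are placeholders.
model = MyModel(n_agents=50)
for _ in range(100):
    model.step()

df = model.datacollector.get_model_vars_dataframe()
print(df.tail())
```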
Can anyone advise on a recommended range of scales? It seems to be quite different from the radius scale in `mesa runserver`.
The range is what you would expect from a Matplotlib scatter plot. I chose not to set it the same way as the old Tornado viz because people are more familiar with Matplotlib's conventions. The more standard it is, the less surprising it is.
@rlskoeser Thanks for the feedback and interest. Just to emphasize, we are always looking for new contributors and perspectives. If you go through the contributors guide, that should get you going, but we are here to answer any questions.
@rht @tpike3 thanks both for the feedback. When I have time to circle back to this I'll create some issues and see about opening a PR.
@rlskoeser I see that you are still using both the old visualization and Solara visualization in https://github.com/Princeton-CDH/simulating-risk. If #1795 and #1796 are merged (which is very soon hopefully), will you switch to using only the Solara one?
@rht yes, I'd love to drop the code for the old visualization and run everything with Solara once the new code has feature parity. Are there any plans to add a user-input option to adjust the speed of rounds?
I'm also wondering about how to consolidate / control user inputs to keep it condensed for models where I want to have multiple controls. I may want to open more PRs with support for additional Solara inputs (I added a checkbox in a local branch for the new model I'm working on).
Yes, the plan is to have feature parity and eventually add more features to the Solara frontend.
Could you clarify this bit:
I'm also wondering about how to consolidate / control user inputs to keep it condensed for models where I want to have multiple controls
I don't think I can follow you there.
Are there any plans to add a user-input option to adjust the speed of rounds?
This is blocked by the fact that Solara's Matplotlib redraw recreates the Matplotlib object from scratch, and is therefore very slow. This project would require gutting Solara's Matplotlib integration to use Matplotlib's figure-update functionality; I already know this would be very performant, because I used the update feature when surveying ipywidgets for a good Jupyter frontend for Mesa. The last resort would be to reimplement it in HTML canvas, which Solara supports and which is essentially what the old visualization uses. This will be one of @ankitk50's main projects if he can pull it off.
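(A rough illustration of the "figure update" approach referred to here: updating an existing scatter artist in place instead of rebuilding the whole figure each step. This is not Solara's actual integration, just a sketch assuming an interactive Matplotlib backend.)

```python
import matplotlib.pyplot as plt
import numpy as np

# Sketch: mutate the existing scatter artist instead of recreating the
# figure each step. Not Solara's actual integration.
fig, ax = plt.subplots()
xy = np.random.rand(100, 2)
scat = ax.scatter(xy[:, 0], xy[:, 1])

def update(new_xy):
    scat.set_offsets(new_xy)   # move existing points in place
    fig.canvas.draw_idle()     # schedule a cheap redraw

update(np.random.rand(100, 2))
```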
I think right now each input for a user-configurable model parameter takes up a whole row. If I have several different inputs it seems like it will get crowded and I don't think all of them need a full row. Is there a way I can combine some of them so that they are more like the step / play / stop buttons in a single row?
Are there any plans to add a user-input option to adjust the speed of rounds?
This is blocked by the fact that Solara's Matplotlib redraw recreates the Matplotlib object from scratch, and is therefore very slow. This project would require gutting Solara's Matplotlib integration to use Matplotlib's figure-update functionality; I already know this would be very performant, because I used the update feature when surveying ipywidgets for a good Jupyter frontend for Mesa. The last resort would be to reimplement it in HTML canvas, which Solara supports and which is essentially what the old visualization uses. This will be one of @ankitk50's main projects if he can pull it off.
Is the altair figure integration any better? I have work in a branch where I've implemented a custom space drawer using altair, because I wanted different shapes to represent agent choices, and from what I could find Matplotlib didn't support that.
I intentionally set it to take a whole row because in a Jupyter notebook, the width is rather small, and you can only add new stuff in the vertical direction. We could have an option to have a Jupyter-optimized layout, or a wide screen layout.
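(For reference, a sketch of packing two controls into one horizontal row with `solara.Row`, as an alternative to one input per row. These are Solara's own widgets, not Mesa's user-input code; the slider names and ranges are made up for illustration.)

```python
import solara

# Sketch: two sliders side by side in one row using solara.Row.
# Labels and ranges are illustrative only.
density = solara.reactive(0.5)
homophily = solara.reactive(3)

@solara.component
def Controls():
    with solara.Row():
        solara.SliderFloat("Density", value=density, min=0.0, max=1.0, step=0.05)
        solara.SliderInt("Homophily", value=homophily, min=0, max=8)
```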
Is the altair figure integration any better?
I haven't compared/measured its performance yet. It's worth testing.
I intentionally set it to take a whole row because in a Jupyter notebook, the width is rather small, and you can only add new stuff in the vertical direction. We could have an option to have a Jupyter-optimized layout, or a wide screen layout.
Ah, ok, I see; that's reasonable. Having it work well in a notebook is important.
When I get the model with more user inputs added, maybe I can share it for feedback on how to make it more manageable.
Is the altair figure integration any better?
I haven't compared/measured its performance yet. It's worth testing.
My WIP Altair figure is currently in a branch, but it's here in case it's helpful: https://github.com/Princeton-CDH/simulating-risk/blob/feature/hawk-dove/simulatingrisk/hawkdove/server.py#L93-L153
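(A rough sketch of the shape-per-agent idea in Altair, which is the feature Matplotlib's scatter doesn't offer per point. This is not the code in the linked branch; the DataFrame columns here are made up for illustration.)

```python
import altair as alt
import pandas as pd

# Sketch: use Altair's shape (and color) encoding to distinguish agent
# choices. Columns are illustrative placeholders.
df = pd.DataFrame(
    {"x": [0, 1, 2], "y": [2, 0, 1], "choice": ["hawk", "dove", "hawk"]}
)
chart = (
    alt.Chart(df)
    .mark_point(filled=True, size=100)
    .encode(x="x:O", y="y:O", shape="choice:N", color="choice:N")
)
```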
Are there any plans to add a user-input option to adjust the speed of rounds?
This is blocked by the fact that Solara's Matplotlib redraw recreates the Matplotlib object from scratch, and is therefore very slow. This project would require gutting Solara's Matplotlib integration to use Matplotlib's figure-update functionality; I already know this would be very performant, because I used the update feature when surveying ipywidgets for a good Jupyter frontend for Mesa. The last resort would be to reimplement it in HTML canvas, which Solara supports and which is essentially what the old visualization uses. This will be one of @ankitk50's main projects if he can pull it off.
I don't think that is the bottleneck here.
At least on my laptop I can "run" the viz much faster by rapidly clicking the step button; the redraw feels almost instant. If I use the start button it redraws much more slowly (even with the interval set to 0), but that can't be because of the Matplotlib redrawing.
Try increasing the grid size (Schelling experimental?) and compare its snappiness with the old viz.
Tested it. Even with a 100x100 grid it only takes around 80 ms. Solara does feel slower as a whole, but I still don't believe that's because of the Matplotlib drawing rather than something else.
80 ms is way too fast for a 100x100 grid. That means you ran it on a powerful machine.
I recall doing line profiling to isolate the cause of the slowness, and nothing else was slow except for the Matplotlib part. You could disprove this.
80 ms is way too fast for a 100x100 grid. That means you ran it on a powerful machine.
I recall doing line profiling to isolate the cause of the slowness, and nothing else was slow except for the Matplotlib part. You could disprove this.
Wait, what? Are you implying that I am lying? And how should I disprove what you are recalling in your head? I totally believe that this is what is inside your head; I am just saying that it is false.
Normally, if you make a claim (such as that Matplotlib drawing is too slow) you have to prove it, not the other way around. That would be absurd, wouldn't it? I could make all sorts of claims.
But if it makes you happy, here you can see a screenshot of my claim.
Yes, this is on a rather powerful machine (my work laptop), but I don't think it's that fast. Honestly, I would have expected it to be an order of magnitude slower, which for a 100x100 grid would still be fine IMHO.
But I am really wondering how you did the line profiling or profiling of solara components in general. Somehow my strategy of using Jupyter notebooks' `%%prun` and `%lprun` magic functions doesn't work for solara components. Or did you just profile the plotting as well?
Wait, what? Are you implying that I am lying?
Did you not read my second sentence?
That means you ran it on a powerful machine.
But I am really wondering how you did the line profiling or profiling of solara components in general.
By placing `time.time()` calls manually in several places. Your benchmark screenshot is not sufficient because it's not on a live server with component updates.
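(An illustration of the manual timing described above: bracket the suspected slow calls with `time.time()` and print the elapsed time. `portray`, `space_ax`, and `model` refer to the `make_space` code shown in the line-profiler output below; this is a sketch, not the exact instrumentation used.)

```python
import time

# Time the two candidate culprits separately: building the portrayal
# data versus the Matplotlib scatter call.
t0 = time.time()
data = portray(model.grid)
t1 = time.time()
space_ax.scatter(**data)
t2 = time.time()
print(f"portray: {(t1 - t0) * 1e3:.1f} ms, scatter: {(t2 - t1) * 1e3:.1f} ms")
```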
That said, here is your benchmark with line-profiler:
Line # Hits Time Per Hit % Time Line Contents
==============================================================
222 @profile
223 def make_space(model, agent_portrayal):
224 1 2.3 2.3 0.0 def portray(g):
225 x = []
226 y = []
227 s = [] # size
228 c = [] # color
229 for i in range(g.width):
230 for j in range(g.height):
231 content = g._grid[i][j]
232 if not content:
233 continue
234 if not hasattr(content, "__iter__"):
235 # Is a single grid
236 content = [content]
237 for agent in content:
238 data = agent_portrayal(agent)
239 x.append(i)
240 y.append(j)
241 if "size" in data:
242 s.append(data["size"])
243 if "color" in data:
244 c.append(data["color"])
245 out = {"x": x, "y": y}
246 if len(s) > 0:
247 out["s"] = s
248 if len(c) > 0:
249 out["c"] = c
250 return out
251
252 1 1641.7 1641.7 0.6 space_fig = Figure()
253 1 43096.0 43096.0 16.9 space_ax = space_fig.subplots()
254 1 5.0 5.0 0.0 if isinstance(model.grid, mesa.space.NetworkGrid):
255 _draw_network_grid(model, space_ax, agent_portrayal)
256 else:
257 1 209496.5 209496.5 82.4 space_ax.scatter(**portray(model.grid))
258 1 8.5 8.5 0.0 space_ax.set_axis_off()
259 1 44.9 44.9 0.0 solara.FigureMatplotlib(space_fig)
The bulk of line 257 (`space_ax.scatter(**portray(model.grid))`) is Matplotlib, not the `portray()` function.
The time unit in the line-profiler output is microseconds.
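(One way to get line-by-line timings like the output above when the `%lprun` magic can't see Solara's component code is to drive line_profiler programmatically. The import path for `make_space` matches the commit shown in the profiler output and may have moved since; `model` and `agent_portrayal` are placeholders, and this is a sketch rather than the exact procedure used above.)

```python
from line_profiler import LineProfiler

# Import path as of the commit shown in the profiler output; adjust if
# make_space has moved. `model` and `agent_portrayal` are placeholders.
from mesa.experimental.jupyter_viz import make_space

lp = LineProfiler()
lp.add_function(make_space)          # trace this function line by line
lp.runcall(make_space, model, agent_portrayal)
lp.print_stats()                     # prints output like the table above
```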
FYI, I ran into a problem rendering `solara.FigureAltair` in Jupyter and Colab contexts. It displays just fine when I use solara runserver locally; in notebooks nothing renders and there are no Python errors (although there are some JS console errors).
I think this is a Solara problem and not Mesa-specific, so I've opened an issue there, but thought I would mention it here so you're aware and in case anyone knows how to address it. https://github.com/widgetti/solara/issues/287
Thanks for weighing in @rlskoeser. I saw that; I am having a similar issue with ipyleaflet and Jupyter. Solara is still very cool!
I think all of my initial questions and related issues have been addressed, fine with me if y'all want to close this issue. (I'll open new ones for my new questions 😉 )
Hello and thanks for the great library.
I recognize that the JupyterViz is still experimental, but it seems like a great solution for allowing project collaborators to run simulations in Colab notebooks. I have questions about using it and also about submitting a PR with some improvements.
I am having some difficulty customizing the display - it took me a bit to figure out that the agent portrayal for `color` needs to be lowercase when previously it was uppercase. I haven't yet figured out how to change the size; previously I was using `r` for radius; I see `size` in the jupyterviz code, but setting that in my agent portrayal doesn't seem to have any effect. I've looked at the code in `mesa.experimental.jupyter_viz`, but I'm not clear on how that's generated; I guess it's part of a matplotlib figure?

I also wanted to use a `Select` input for a choice parameter and add an output that displays the current step. I have a fork of mesa where I've implemented these - should I open a pull request with these changes to get feedback? (I haven't written any tests or added documentation for the changes yet, but would do if these changes are welcome.)

I'm also interested in adding a reset button similar to the play/stop buttons, but I'm not sure how to programmatically restart the model; currently the only way to reset is to change one of the input parameters.

Another question: when running with jupyterviz, how should I access the model in order to get collected data? I figured out I could get it this way: `page.args[0].model.datacollector.get_model_vars_dataframe()` — but that doesn't seem ideal!

related issues
#1773 / #1775
#1776
#1777
#1778