holoviz-topics / neuro

HoloViz+Bokeh for Neuroscience
BSD 3-Clause "New" or "Revised" License

Add Panel and HoloViews benchmark #74

Closed · ianthomas23 closed 8 months ago

ianthomas23 commented 11 months ago

This builds on top of #73 and will need to be rebased against main after that PR is merged.

It adds a benchmark of a Panel and HoloViews example supplied by @droumis and @philippjfr. It runs fine provided each benchmark is run only once, i.e. in quick mode using the -q flag: asv run -e -b Panel -q.

Running each benchmark multiple times in the normal manner (asv run -e -b Panel) gives the following error:

RuntimeError: Models must be owned by only a single document, ImportedStyleSheet(id='p1005', ...) is already in a doc

which I think implies that the Bokeh/Panel/Tornado servers are not restarting as I intended between multiple repeats of the same benchmark.
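
For context, this is the error Bokeh raises whenever a single model instance, e.g. one created at module import time, is attached to more than one document. A minimal sketch of that failure mode in isolation (not the benchmark code itself):

from bokeh.document import Document
from bokeh.plotting import figure

# A model created once, e.g. at module import time.
plot = figure()

doc1 = Document()
doc1.add_root(plot)  # fine: the model now belongs to doc1

doc2 = Document()
doc2.add_root(plot)  # RuntimeError: Models must be owned by only a single document

If the server restarted cleanly between repeats, each repeat would build fresh models for a fresh document instead.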

I will continue with this next week.

droumis commented 11 months ago

Nice!

❯ asv run -e -b Panel -q
· Creating environments
· Discovering benchmarks
·· Uninstalling from virtualenv-py3.11-playwright
·· Building d8a1ed52 <panel_holoviews_benchmark> for virtualenv-py3.11-playwright...
·· Installing d8a1ed52 <panel_holoviews_benchmark> into virtualenv-py3.11-playwright.........
· Running 1 total benchmarks (1 commits * 1 environments * 1 benchmarks)
[  0.00%] · For hvneuro commit d8a1ed52 <panel_holoviews_benchmark>:
[  0.00%] ·· Benchmarking virtualenv-py3.11-playwright
[ 50.00%] ··· panel_holoviews_example.PanelHoloviewsExample.time_latency                                                                                                                                   ok
[ 50.00%] ··· ========= ========= =========
              --           output_backend  
              --------- -------------------
                  n       canvas    webgl  
              ========= ========= =========
                 1000    184±0ms   125±0ms 
                10000    131±0ms   143±0ms 
                100000   237±0ms   246±0ms 
               1000000   1.72±0s   1.68±0s 
              ========= ========= =========
ianthomas23 commented 8 months ago

The fix here turned out to be very simple. asv has three attributes that control how many individual benchmark runs are performed and how they relate to each pair of setup and teardown calls. For the Panel-based tests, using repeat=1 means just one benchmark run occurs per setup/teardown pair, and this works well.
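
For reference, here is a hedged sketch of what that looks like on a benchmark class. The run-count attributes number, repeat and rounds are standard asv knobs; the class body is abbreviated for illustration and is not the actual code in this repo:

class PanelHoloviewsExample:
    # asv run-count attributes: number is the timed iterations per sample,
    # repeat is the number of samples, rounds is the number of rounds.
    # repeat = 1 gives a single benchmark run per setup/teardown pair, so
    # the server and browser are torn down and restarted between runs.
    number = 1
    repeat = 1
    rounds = 1

    def setup(self):
        ...  # start the Bokeh/Panel server and Playwright browser

    def teardown(self):
        ...  # shut them down again

    def time_latency(self):
        ...  # drive the page and measure the response time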

On my dev machine (M1 mac without dedicated graphics):

$ asv run -e
· Creating environments
· Discovering benchmarks
· Running 3 total benchmarks (1 commits * 1 environments * 3 benchmarks)
[  0.00%] · For hvneuro commit 8128c61c <main>:
[  0.00%] ·· Benchmarking virtualenv-py3.11-playwright
[ 20.00%] ··· Running (panel_holoviews_example.PanelHoloviewsExample.time_latency--)..
[ 60.00%] ··· Running (panel_holoviews_example.PanelHoloviewsExample.time_latency--)..
[ 73.33%] ··· Running (bokeh_example.BokehExampleZoom.time_zoom--).
[ 80.00%] ··· Running (panel_holoviews_example.PanelHoloviewsExample.time_latency--).
[ 86.67%] ··· bokeh_example.BokehExampleLatency.time_latency                                                               ok
[ 86.67%] ··· ========= ============ ==========
              --             output_backend    
              --------- -----------------------
                  n        canvas      webgl   
              ========= ============ ==========
                 1000     71.8±2ms    75.3±4ms 
                10000     78.5±4ms    93.0±2ms 
                100000    197±4ms     208±3ms  
               1000000   1.61±0.01s   1.61±0s  
              ========= ============ ==========

[ 93.33%] ··· bokeh_example.BokehExampleZoom.time_zoom                                                                     ok
[ 93.33%] ··· ========= ========== ==========
              --            output_backend   
              --------- ---------------------
                  n       canvas     webgl   
              ========= ========== ==========
                 1000    60.5±2ms   65.1±7ms 
                10000    64.8±7ms   56.3±1ms 
                100000   76.8±1ms   67.9±6ms 
               1000000   289±5ms    240±6ms  
              ========= ========== ==========

[100.00%] ··· panel_holoviews_example.PanelHoloviewsExample.time_latency                                                   ok
[100.00%] ··· ========= ========= ============
              --            output_backend    
              --------- ----------------------
                  n       canvas     webgl    
              ========= ========= ============
                 1000    107±2ms    110±4ms   
                10000    121±9ms   126±0.6ms  
                100000   228±3ms    230±6ms   
               1000000   1.59±0s   1.57±0.01s 
              ========= ========= ============

This was using asv 0.5.1, as there are some changes in asv 0.6 that I haven't dealt with yet.

@droumis It would be good if you could see if you can run this locally now.

ianthomas23 commented 8 months ago

Also, this was using the latest releases in the asv virtual environment, Bokeh 3.3.0, Panel 1.3.0, Param 2.0.0 and HoloViews 1.18.0, without any problems.

droumis commented 8 months ago

Great work!


❯ asv run -e
· Creating environments
· Discovering benchmarks
· Running 3 total benchmarks (1 commits * 1 environments * 3 benchmarks)
[  0.00%] · For hvneuro commit 519ee3c2 <main>:
[  0.00%] ·· Benchmarking virtualenv-py3.11-playwright
[ 20.00%] ··· Running (panel_holoviews_example.PanelHoloviewsExample.time_latency--)..
[ 60.00%] ··· Running (panel_holoviews_example.PanelHoloviewsExample.time_latency--)..
[ 73.33%] ··· Running (bokeh_example.BokehExampleZoom.time_zoom--).
[ 80.00%] ··· Running (panel_holoviews_example.PanelHoloviewsExample.time_latency--).
[ 86.67%] ··· bokeh_example.BokehExampleLatency.time_latency                                                                                  ok
[ 86.67%] ··· ========= ============ ============
              --              output_backend
              --------- -------------------------
                  n        canvas       webgl
              ========= ============ ============
                 1000    60.6±0.9ms    71.8±7ms
                10000     80.7±5ms     95.2±5ms
                100000    217±2ms      222±4ms
               1000000   1.84±0.01s   1.80±0.01s
              ========= ============ ============

[ 93.33%] ··· bokeh_example.BokehExampleZoom.time_zoom                                                                                        ok
[ 93.33%] ··· ========= =========== ==========
              --            output_backend
              --------- ----------------------
                  n        canvas     webgl
              ========= =========== ==========
                 1000     51.9±3ms   42.7±3ms
                10000    41.5±10ms   43.6±7ms
                100000    75.6±2ms   71.9±3ms
               1000000    297±4ms    242±4ms
              ========= =========== ==========

[100.00%] ··· panel_holoviews_example.PanelHoloviewsExample.time_latency                                                                      ok
[100.00%] ··· ========= ============ ============
              --              output_backend
              --------- -------------------------
                  n        canvas       webgl
              ========= ============ ============
                 1000     108±10ms     112±10ms
                10000     122±4ms      133±6ms
                100000    235±6ms      253±5ms
               1000000   1.71±0.03s   1.68±0.01s
              ========= ============ ============
ianthomas23 commented 8 months ago

Here are some thoughts on debugging benchmarks. So far, when things go wrong it is usually due to communication or timeout issues, and everything just freezes, making it difficult to debug. What I do is limit the set of params for the benchmark in question and turn off the browser's headless mode, i.e. change this line in base.py:

self._browser = playwright.chromium.launch(headless=True)

into

self._browser = playwright.chromium.launch(headless=False)

Then run the benchmark in quick mode, e.g. asv run -b Panel -e -q, and the browser window will appear. If the benchmark is stuck waiting for something to happen, you can open the browser's developer console to see what is going on. Sometimes adding extra timeouts to the benchmark helps too; otherwise it can run too fast to really understand what is happening.
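
Playwright's slow_mo launch option (milliseconds added to each browser operation) can also slow things down enough to watch. A hypothetical variant of the launch line above, with the debugging toggles driven by environment variables; HVNEURO_HEADED and HVNEURO_SLOW_MO are invented names, not something this repo defines:

import os

# Invented debugging toggles, for illustration only:
# HVNEURO_HEADED=1 shows the browser window; HVNEURO_SLOW_MO=250 slows
# each Playwright operation by 250 ms so you can follow along.
headed = os.environ.get("HVNEURO_HEADED", "0") == "1"
slow_mo = float(os.environ.get("HVNEURO_SLOW_MO", "0"))

self._browser = playwright.chromium.launch(headless=not headed, slow_mo=slow_mo)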