Hazelfire opened this issue 2 years ago
Description:
I'm not sure why, but performance is really poor in Observable notebooks. My current best guess is that KDE takes a lot of time, since the performance issues seem roughly correlated with the number of graphs I render at the same time.

This would be one more reason to use histograms rather than KDE to display sample-set distributions, which would also fix the issue of rendering samples below 0.
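For illustration only, here is a minimal sketch of what a histogram-based display path could look like. The names and shapes below are hypothetical, not Squiggle's actual API; the optional lower bound (e.g. 0) is what keeps the plot from showing mass below zero:

```typescript
// Hypothetical sketch, not Squiggle's implementation: bin samples into a
// histogram instead of running KDE.
interface HistogramBin {
  start: number;   // left edge of the bin
  end: number;     // right edge of the bin
  density: number; // normalized so the bin areas sum to 1
}

function histogramFromSamples(
  samples: number[],
  binCount: number,
  lowerBound?: number // e.g. 0 for nonnegative quantities
): HistogramBin[] {
  const min = lowerBound ?? Math.min(...samples);
  const max = Math.max(...samples);
  const width = (max - min) / binCount || 1; // avoid zero-width bins
  const counts = new Array<number>(binCount).fill(0);

  for (const x of samples) {
    // Clamp so the maximum sample falls into the last bin.
    const i = Math.min(binCount - 1, Math.max(0, Math.floor((x - min) / width)));
    counts[i] += 1;
  }

  return counts.map((count, i) => ({
    start: min + i * width,
    end: min + (i + 1) * width,
    density: count / (samples.length * width),
  }));
}
```

Binning is a single O(n) pass over the samples, so even if KDE turns out not to be the whole story, it removes one plausible per-chart cost.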
Steps to reproduce:

Some of my notebooks here: https://observablehq.com/@hazelfire/quantifying-uncertainty-in-givewell-ceas (particularly Helen Keller and Malaria Consortium, but also the main notebook itself).

Comment:

I think it's more likely that the cost is somewhere in rendering point sets. In that case we'd want to cut down the number of points before rendering, for example by averaging groups of them together. It should be possible to test this by running a notebook that plots many symbolic distributions converted directly to point sets, since that path doesn't go through KDE.
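If the cost does turn out to be in rendering point sets, one way to cut the point count before rendering, along the lines suggested above, is to average consecutive groups of points. A rough sketch (the helper below is hypothetical, not part of the library):

```typescript
// Hypothetical helper: shrink an (x, y) point set by averaging consecutive
// groups of `factor` points before handing it to the chart renderer.
type Point = { x: number; y: number };

function downsamplePointSet(points: Point[], factor: number): Point[] {
  if (factor <= 1) return points;
  const result: Point[] = [];
  for (let i = 0; i < points.length; i += factor) {
    const group = points.slice(i, i + factor);
    result.push({
      // Averaging both coordinates keeps the overall shape while reducing
      // the number of SVG/canvas elements that have to be drawn.
      x: group.reduce((sum, p) => sum + p.x, 0) / group.length,
      y: group.reduce((sum, p) => sum + p.y, 0) / group.length,
    });
  }
  return result;
}
```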
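Separately, a crude way to tell whether KDE or rendering dominates is to time each stage with the browser's Performance API; `runKde` and `renderPointSet` below are placeholders for whatever the notebook actually calls, not real Squiggle entry points:

```typescript
// Timing sketch using the standard Performance API; both functions are
// placeholders for the notebook's actual KDE and chart-rendering calls.
type Point = { x: number; y: number };
declare function runKde(samples: number[]): Point[];
declare function renderPointSet(points: Point[]): void;

function timeStages(samples: number[]): void {
  const t0 = performance.now();
  const points = runKde(samples);
  const t1 = performance.now();
  renderPointSet(points);
  const t2 = performance.now();
  console.log(`KDE: ${(t1 - t0).toFixed(1)} ms, render: ${(t2 - t1).toFixed(1)} ms`);
}
```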