nomic-ai / deepscatter

Zoomable, animated scatterplots in the browser that scale to over a billion points

Roadmap for 3D support? #30

Open lanesawyer opened 2 years ago

lanesawyer commented 2 years ago

Hi, I'm currently evaluating deepscatter as a potential tool to use to display brain cell data. The library is solid. We were able to get our own data displaying in the viewer, which was very exciting!

However, I noticed one of the principles was "This is a 2d library. No fake 3d.", but I wasn't quite sure what "fake 3D" meant. As part of my search for a suitable tool, I was hoping to find a library that could support both 2D and 3D datasets. Is there a roadmap for supporting 3D datasets in the future?

bmschmidt commented 2 years ago

Thanks!

I don't have any short-term plans for 3d. The next step on the roadmap is more likely to be moving from a WebGL backend to a WebGPU one, which will increase performance and allow a lot more operations on different types of Arrow dataframes. That might provide a decent framework for 3d projections, for which there is some groundwork in the existing tiling libraries, but if I'm being honest, the scenarios in which I put time into this feature in the next 12 months all involve financial support.

"Fake 3d" meant more embeddings like this where the third dimension is mostly aesthetic.

But since you ask, a couple of notes! My inclination is that 3d in this library would probably look different than in real spatial visualization libraries, because I'd keep following the rules of 2d scatterplots rather than those of perspectival cameras. The most important difference would involve what the point of reference and rotation mean. Usually 3d interfaces involve navigating a camera through a 3d space, with a near and far frustum where the camera eye is outside the sphere.

Rather than have a trapezoidal frustum (or just a long view) for a stereographic camera, my thought is deepscatter would have a rectangular frustum with an orthographic camera, which is how it handles 2d spaces. Probably it would be necessary to allow 3d rotation for this to really make sense. "Zoom" would mean "contract the view cube towards the focal point" rather than "move the 3d camera along a direction."
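To make the "contract the view cube" idea concrete, here's a minimal sketch of what that zoom semantics could look like. This is purely illustrative; `ViewCube` and `zoomToward` are hypothetical names, not part of the deepscatter API, and the real implementation would live in shader/transform code rather than plain TypeScript:

```typescript
// Hypothetical sketch: "zoom" as contracting an axis-aligned view cube
// toward a focal point, rather than moving a perspective camera.

type Vec3 = [number, number, number];

interface ViewCube {
  min: Vec3; // lower corner of the visible region, in data space
  max: Vec3; // upper corner of the visible region, in data space
}

// Shrink (k < 1) or grow (k > 1) the cube about a focal point.
// The focal point stays fixed in data space, mimicking 2d scroll-zoom.
function zoomToward(cube: ViewCube, focal: Vec3, k: number): ViewCube {
  const pull = (v: number, f: number) => f + (v - f) * k;
  return {
    min: cube.min.map((v, i) => pull(v, focal[i])) as Vec3,
    max: cube.max.map((v, i) => pull(v, focal[i])) as Vec3,
  };
}

// Example: zooming in 2x toward the cube's center leaves the center fixed.
const cube: ViewCube = { min: [0, 0, 0], max: [4, 4, 4] };
const zoomed = zoomToward(cube, [2, 2, 2], 0.5);
// zoomed is { min: [1, 1, 1], max: [3, 3, 3] }
```

One nice property of this formulation is that it has no camera position at all, only a visible region, which is exactly why it generalizes to more than three dimensions: a "view hypercube" contracts the same way regardless of dimension count.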

I have a hard time perfectly visualizing what this would look like, and it might be a terrible idea; but it would also more easily generalize to four, five, or six dimensions, which I actually think is somewhat worthwhile for the most common use I see for this library (UMAP reductions of hundred-ish dimensional datasets).

lanesawyer commented 2 years ago

I appreciate the quick response and all the info!