I've been trying the following:

1. Saving `ds.tf.Image` objects as parquet files instead of images
2. Loading the `ds.tf.Image` objects
3. Using `ds.tf.stack(*images)` to stack them

But whenever I pass multiple different image objects to the stack method, I get a weird error about a ufunc not supporting the input types :( I'll be trying other methods next week.
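A minimal sketch of that attempt, assuming one projected parquet file of antenna positions per session with `x`/`y` columns (the file names, column names, and `shade_session` helper are placeholders; the round-trip of the `ds.tf.Image` objects themselves through parquet is elided here):

```python
import datashader as ds
import datashader.transfer_functions as tf
import pandas as pd

def shade_session(df: pd.DataFrame) -> tf.Image:
    """Aggregate and shade one session's antenna positions into an image."""
    canvas = ds.Canvas(plot_width=800, plot_height=600)  # no explicit ranges yet
    return tf.shade(canvas.points(df, "x", "y"))

sessions = [pd.read_parquet(p) for p in ("session_a.parquet", "session_b.parquet")]
images = [shade_session(df) for df in sessions]

# Composite the per-session images; this is the call that raised the
# "ufunc ... not supported for the input types" error.
stacked = tf.stack(*images)
```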
Update: I fixed the ufunc error by including explicit x-ranges in the datashader `Canvas` options (the xarrays initially weren't aligning properly). But `ds.tf.stack` still gives weird results: points seem to be missing, and stacking more images results in *fewer* points (?)
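For reference, a sketch of that fix, assuming the shared ranges are computed from the per-session data up front (file and column names are placeholders):

```python
import datashader as ds
import datashader.transfer_functions as tf
import pandas as pd

sessions = [pd.read_parquet(p) for p in ("session_a.parquet", "session_b.parquet")]

# Give every session the *same* Canvas ranges so the aggregated xarrays
# share identical coordinates and can be composited.
x_range = (min(df["x"].min() for df in sessions), max(df["x"].max() for df in sessions))
y_range = (min(df["y"].min() for df in sessions), max(df["y"].max() for df in sessions))
canvas = ds.Canvas(plot_width=800, plot_height=600, x_range=x_range, y_range=y_range)

images = [tf.shade(canvas.points(df, "x", "y")) for df in sessions]

# No more ufunc error, but the composite still seems to drop points.
stacked = tf.stack(*images, how="over")
```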
I've been trying stacking at different points of the image-generation process. Right now, stacking seems to be successful when I save the projected geoviews points data as parquet files, concatenate them as dataframes, and shade afterwards. But when working with dynamic plots, holoviews complains when saving the stacked shaded image (something about the RGBA values being float64 rather than uint8).
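Roughly, the approach that worked (the `stack_parquets` approach referred to below): concatenate the projected point data first, then aggregate and shade once. A sketch, with placeholder file and column names:

```python
import datashader as ds
import datashader.transfer_functions as tf
import pandas as pd

# Concatenate the projected per-session points first, then aggregate and
# shade the combined DataFrame in a single pass.
parts = [pd.read_parquet(p) for p in ("session_a.parquet", "session_b.parquet")]
combined = pd.concat(parts, ignore_index=True)

canvas = ds.Canvas(plot_width=800, plot_height=600)
image = tf.shade(canvas.points(combined, "x", "y"))
```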
For a given pair of sessions:

- Provide images for each, so we can determine whether they are equivalent
The overall plots seem similar, but I realized that the colors are actually different for some points (I was using a different colormap earlier). It seems like `stack_parquets` is more accurate, because `stack_shaded` shows the upper-right points in the same color as the denser lower-left region(?)
Sounds like sticking with the parquet stacking makes the most sense! Good work
I think it should be possible to "stack" (i.e. add along the Z axis) a given set of "shaded" plots together. Imagine two scenarios:

1. We use `datashader` to generate a single scatter plot of all the Sessions' Antenna Positions at once
2. We use `datashader` to plot a given set of Sessions' Antenna Positions to discrete plots on disk, one per Session. Then we stack the resultant images together using `datashader`

If this idea is viable, the end result of (1) and (2) should be identical. This issue therefore covers whatever proof of concept is necessary to determine this.
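A sketch of the proof of concept the two scenarios suggest, assuming one projected parquet file of antenna positions per Session (paths, column names, and plot dimensions are placeholders, not from the actual pipeline):

```python
import datashader as ds
import datashader.transfer_functions as tf
import pandas as pd

sessions = [pd.read_parquet(p) for p in ("session_a.parquet", "session_b.parquet")]

# Shared ranges so both scenarios rasterize onto identical grids.
x_range = (min(df["x"].min() for df in sessions), max(df["x"].max() for df in sessions))
y_range = (min(df["y"].min() for df in sessions), max(df["y"].max() for df in sessions))
canvas = ds.Canvas(plot_width=800, plot_height=600, x_range=x_range, y_range=y_range)

# Scenario 1: shade all Sessions' Antenna Positions in one pass.
combined = pd.concat(sessions, ignore_index=True)
img_combined = tf.shade(canvas.points(combined, "x", "y"))

# Scenario 2: shade each Session separately, then stack the images.
img_stacked = tf.stack(*(tf.shade(canvas.points(df, "x", "y")) for df in sessions))

# If stacking is viable, img_combined and img_stacked should match.
```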