pakiessling closed this issue 3 weeks ago
Hello @pakiessling,
Normally, the `.zarr` should contain the table: it should be inside `DATA_DIR.zarr/tables/table`. You don't have such a directory? Even when running the toy dataset from the tutorial?
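If the table seems to be missing, a quick sanity check is to look for that directory on disk. This is just a filesystem sketch (the `DATA_DIR.zarr` path is the placeholder from above, and `has_default_table` is a hypothetical helper name, not a Sopa or SpatialData API):

```python
from pathlib import Path

def has_default_table(zarr_path: str) -> bool:
    """Check whether a SpatialData .zarr store contains the default table element."""
    return (Path(zarr_path) / "tables" / "table").is_dir()

# e.g. has_default_table("DATA_DIR.zarr")
```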
Yes, the intrinsic coordinate system is in microns if you use Baysor. Since we use SpatialData, you can convert it to any coordinate system using the `spatialdata.transform` API. This way, you can transform the shape coordinates into pixel coordinates if desired!
Ty!
It turns out something with my SpatialData installation was messed up. In a fresh environment, the table loads just fine.
Nice, good to hear!
Not really related to Sopa, but do you know how to go about using `spatialdata.transform`? Do I first have to attach an image to the SpatialData object like:

```python
Image2DModel.parse(image, dims=("y", "x", "c"))
```

and this will bring its pixel coordinate system with it?
You need to provide the name of the coordinate system into which you want to transform your shapes. For instance, if the pixel coordinate system is called `"global"`:

```python
element = sdata["baysor_boundaries"]
baysor_boundaries_pixel = spatialdata.transform(element, to_coordinate_system="global")
```
In case your image doesn't have a specific coordinate system, you can add one (using an `Identity` transformation together with the `set_transformation` API), but the easiest way is probably to use the `to_intrinsic` function from Sopa.
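For intuition, a micron-to-pixel mapping is typically just a scale transformation. Here is a minimal, self-contained sketch of that idea; the pixel size (0.25 µm per pixel) and the `microns_to_pixels` helper are hypothetical illustrations, not values or functions from Sopa or SpatialData:

```python
# Illustration of what a micron -> pixel scale transformation does.
# MICRONS_PER_PIXEL is a made-up example value; use the actual pixel
# size of your own image when converting real coordinates.
MICRONS_PER_PIXEL = 0.25

def microns_to_pixels(coords_um):
    """Scale (x, y) coordinates from microns to pixel units."""
    return [(x / MICRONS_PER_PIXEL, y / MICRONS_PER_PIXEL) for x, y in coords_um]

# A toy polygon vertex list in microns:
polygon_um = [(10.0, 21.5), (0.0, 4.25)]
print(microns_to_pixels(polygon_um))  # [(40.0, 86.0), (0.0, 17.0)]
```

In practice you should let `spatialdata.transform` handle this, since it reads the registered transformations (which may also include translations or affine components) instead of a single hand-coded scale factor.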
Hi,
I am running Sopa Baysor segmentation via Snakemake as shown in the tutorial. It works quite well and is much more convenient than writing my own code for running Baysor on chunks.
I noted that the output `.zarr` does not contain a cell x gene matrix, only points, images, and shapes. It's not a big deal, since there is an AnnData in the `.explorer` directory, but shouldn't this also be in the zarr by default?
Another question I have is about the segmented shapes. For Baysor, I assume these are in µm, since those are the input coordinates. Or are they converted to pixels to match the image in the zarr?