Gabry993 opened this issue 2 years ago
Hey @Gabry993,

It won't be any faster, but you could probably use `similarity` rather than `affine`, e.g.:

```python
out = image.similarity(scale=1.2, angle=45)
```

The output image is the bounding box of the transformed image, with `.xoffset` and `.yoffset` set to the position of the origin. It just calls `affine` for you with a computed transform.
For translation, it's quickest to use the `x` and `y` parameters of `composite`. You'll save transforming and compositing the non-overlapping parts.
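Since `composite` takes per-overlay positions, the externally supplied relative positions just need normalising into non-negative offsets plus a canvas size. A hypothetical helper (name and interface are mine) might look like:

```python
def composite_offsets(sizes, positions):
    """Turn image sizes [(w, h), ...] and externally provided top-left
    positions [(x, y), ...] (possibly negative) into non-negative
    offsets, suitable for composite's x/y parameters, plus the overall
    canvas size that contains every image."""
    min_x = min(x for x, _ in positions)
    min_y = min(y for _, y in positions)
    # Shift all positions so the leftmost/topmost image sits at (0, 0).
    offsets = [(x - min_x, y - min_y) for x, y in positions]
    canvas_w = max(ox + w for (ox, _), (w, _) in zip(offsets, sizes))
    canvas_h = max(oy + h for (_, oy), (_, h) in zip(offsets, sizes))
    return offsets, (canvas_w, canvas_h)
```

The resulting per-image offsets would then be passed as the `x` and `y` arrays of `composite`, so only overlapping regions are actually blended.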
If you are doing large-scale reductions (a shrink of 2x or more), it'd be quicker to pick a lower-resolution layer in the source WSI pyramid. You've probably thought of this.
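The level pick can be sketched as: assuming the usual 2x-per-level WSI pyramid, take the floor of log2 of the requested shrink as the level, then do the small residual shrink in software. The helper name and the 2x-per-level assumption are mine:

```python
import math

def pick_pyramid_level(shrink, n_levels):
    """Pick the deepest pyramid level that still meets the requested
    shrink factor, assuming each level halves the resolution, and
    return (level, residual_shrink) where the residual is what is
    left to do in software from that level."""
    if shrink < 2 or n_levels <= 1:
        return 0, shrink
    level = min(int(math.floor(math.log2(shrink))), n_levels - 1)
    residual = shrink / (2 ** level)
    return level, residual
```

For example, a 4x shrink on a five-level pyramid reads level 2 directly with no residual resampling, which avoids decoding the full-resolution plane entirely.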
Hello, I'm trying to understand how to optimize this operation and run it as fast as possible: I have several WSIs which I would like to rotate/translate and then join together according to their relative positions (externally provided). In the attached script, I'm basically taking 3 images (~250k x 250k each), setting white as the transparent colour, computing 3 affine transformations (which send each image into one 3 times larger, translating the 2nd and the 3rd to the right), and then compositing them together. The resulting TIFF is what I expect, but it takes roughly 7-8 hours to compute on my laptop. Since I'm not at all sure that I'm using the proper functions in the right way, I was wondering if there are alternative/proper ways to achieve what I need which are also more efficient. I hope my request is clear enough; if not, let me know and I will try to elaborate. Thank you very much!
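For what it's worth, the rotate-plus-translate each image needs boils down to a 2x2 matrix and a pair of output offsets, which is the shape of input `affine` takes (a flat `[a, b, c, d]` list plus `odx`/`ody`). A small dependency-free sketch of building those parameters, with my own helper name and sign conventions (check the affine docs for the exact ones libvips uses):

```python
import math

def rotate_translate_params(angle, dx, dy):
    """Build the flat 2x2 matrix [a, b, c, d] for a rotation by
    `angle` degrees, plus (odx, ody) output offsets for a subsequent
    translation by (dx, dy) pixels."""
    rad = math.radians(angle)
    a, b = math.cos(rad), -math.sin(rad)
    c, d = math.sin(rad), math.cos(rad)
    return [a, b, c, d], (dx, dy)
```

With parameters in this form, one transform per image followed by a single `composite` with per-image `x`/`y` offsets (rather than transforming each image into a canvas 3 times its size) should cut out most of the wasted work on empty pixels.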