Open Xqua opened 6 years ago
Hi @Xqua ,
I'm assuming that you have installed BigStitcher? Doing this will also update the Multiview Reconstruction.
In the work on BigStitcher, we introduced the Tile attribute (in addition to the existing Channel, Illumination, Angle, and TimePoint attributes), representing the (x,y,z) stage coordinates at which an image was acquired. Since MVR and BigStitcher share the same data model (essentially, they are two 'modes' of the same plugin), you are also seeing Tiles in the Multiview Reconstruction.
Since the stage coordinates typically differ for (tiled) acquisitions from multiple angles, we decided to assign a separate Tile to every (x,y,z,angle) combination (instead of one Tile shared by all angles). So it is perfectly normal to see 4 Tiles in a 4-angle dataset: each angle has its own Tile.
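To illustrate the idea (this is just a sketch of the described data model, not BigStitcher's actual Java code; the function and field names are made up), assigning one Tile per distinct (x,y,z,angle) combination could look like:

```python
from itertools import count

def assign_tiles(views):
    """Give each distinct (x, y, z, angle) combination its own Tile id.

    `views` is a list of dicts with hypothetical keys "x", "y", "z", "angle".
    Views sharing stage position AND angle get the same Tile id; the same
    stage position seen from a different angle gets a new one.
    """
    ids = {}            # (x, y, z, angle) -> tile id
    counter = count()   # next unused tile id
    return [ids.setdefault((v["x"], v["y"], v["z"], v["angle"]), next(counter))
            for v in views]

# A 4-angle, single-position dataset yields 4 Tiles, one per angle.
views = [{"x": 0, "y": 0, "z": 0, "angle": a} for a in (0, 90, 180, 270)]
```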
I hope this explains what the 'Tiles' are and where they come from.
Best, David
Makes complete sense!
It should also make the RANSAC part much faster in theory, since you are starting from a "good" initial position!
Actually, if you have this info, couldn't you also do a phase-correlation alignment without beads?
Hi @Xqua
I don't think the RANSAC speed is affected that much, since we still run it on all interest points of two images (as long as they have non-empty overlap). The main parameter affecting the speed of the pairwise RANSAC is the number of RANSAC iterations (https://imagej.net/BigStitcher_Registration#Specific_Registration_Options). But you are correct: in theory we could use only the interest points from the overlap volume to get a much smaller candidate set and speed up the process.
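The candidate-set reduction mentioned above could be sketched as follows (a minimal illustration, not BigStitcher code; helper names are made up, and it assumes axis-aligned, equally sized tiles whose stage offsets are known):

```python
def overlap_box(pos_a, pos_b, size):
    """Axis-aligned overlap of two equally sized tiles given their (x,y,z) offsets.

    Returns (lo, hi) corner lists, or None if the tiles do not overlap.
    """
    lo = [max(pa, pb) for pa, pb in zip(pos_a, pos_b)]
    hi = [min(pa + s, pb + s) for pa, pb, s in zip(pos_a, pos_b, size)]
    if any(l >= h for l, h in zip(lo, hi)):
        return None
    return lo, hi

def points_in_box(points, box):
    """Keep only the interest points inside the overlap box."""
    lo, hi = box
    return [p for p in points
            if all(l <= c < h for c, l, h in zip(p, lo, hi))]

# Two 100x100x50 tiles shifted 80 units in x overlap in x = [80, 100).
box = overlap_box((0, 0, 0), (80, 0, 0), (100, 100, 50))
```

Feeding only `points_in_box(...)` into RANSAC shrinks the candidate correspondences, which is the speed-up discussed above.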
Regarding the phase correlation alignment, that is exactly what we do in BigStitcher. Our basic workflow there is: for each (Angle, TimePoint) combination, we align the Tiles using pairwise phase correlation followed by global optimization. Here, the (x,y,z) metadata can speed things up considerably, since we only have to align the overlapping volumes to get the relative shift of two images. Without metadata, we would have to do an all-to-all alignment, which still works most of the time but is obviously much slower.
We also have expert options (https://imagej.net/BigStitcher_Advanced_stitching) in BigStitcher that allow you to align Angles using phase correlation (among other things). This only fits a translation model, but you can use it on pre-rotated views (we then use the virtually transformed images as input). Still, since the rotation from the metadata will probably not be 100% exact, the interest-point-based registration should be better for multi-view alignment. It often also works if you do not have beads in your samples, as long as there are sufficiently prominent local minima or maxima.
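For intuition, here is a minimal 1-D phase correlation sketch in pure Python (real stitching uses FFTs on 2-D/3-D volumes with sub-pixel peak fitting; the function names below are illustrative, not from any library):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for a tiny demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def phase_correlation_shift(a, b):
    """Estimate the circular shift d such that b[n] == a[n - d]."""
    A, B = dft(a), dft(b)
    # Normalized cross-power spectrum: magnitude discarded, phase kept.
    R = [Ak.conjugate() * Bk / (abs(Ak.conjugate() * Bk) or 1.0)
         for Ak, Bk in zip(A, B)]
    r = idft(R)
    # The inverse transform peaks at the relative shift.
    return max(range(len(r)), key=lambda n: r[n].real)

# b is a shifted circularly to the right by 3 samples.
a = [0, 0, 1, 5, 1, 0, 0, 0]
b = a[-3:] + a[:-3]
```

The same idea extends to images: the peak location in the inverse-transformed cross-power spectrum gives the translation between two overlapping tiles, which is why it only yields a translation model.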
Best, David
Thanks a lot for this info!
I'll have to play with this, as I have a dataset that has no beads (well, it had beads in the wrong channel ...) and I had put it aside until I found the time to write up a phase correlation algorithm myself!
I might come bug you sometime in the future when I try it!
Hi,
Maybe this is intended behavior? But I have a simple 4-angle, 1-timepoint dataset, and the new version of Multiview Reconstruction detects 4 Tiles in it.
It also detects my 4 angles.
But I'm pretty sure I did not tile my samples at any point.
Is this normal behavior?
PS: The dataset was generated on the Zeiss Z1