neuroinformatics-unit / derotation

Derotate images acquired via 2-photon or 3-photon calcium imaging
BSD 3-Clause "New" or "Revised" License

More features to the `Rotator` / microscope simulation #27

Open lauraporta opened 2 weeks ago

lauraporta commented 2 weeks ago

WIP: list of possible features to add to the Rotator class or bugs to be solved

lauraporta commented 19 hours ago

Suggestions from @JoeZiminski as they were shared on the #25 PR:

I was peeking ahead to the next PR and saw the center of rotation works for the simulated data but not the real data. I was thinking about next steps for adding further simulations (of course out of scope for this PR) that move closer to the real-world data, so I'll dump some ideas here in case they're useful:

ATM, if I am understanding correctly, the field of view is not fully occupied, i.e. the square sits in the center and the entire square is kept in the field of view during rotation. This is very useful as it shows how those strips appear in the derotation. As an extension, it would be interesting to see what happens when the square fills the FOV entirely, so portions of it are completely cut off during rotation. In real life this would also introduce new features into the FOV, as parts of the wider image outside the FOV are moved into the FOV during rotation. Maybe initially the test image can be a square the size of the entire FOV, but with additional black edges that move into the FOV during rotation.
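A minimal numpy/scipy sketch of this idea (the `fov_view` helper and all sizes are hypothetical, not part of the `Rotator` API): a bright square exactly the size of the FOV is padded with a black border, the whole sample is rotated, and only the central FOV is read out, so the black edges enter the view at non-zero angles.

```python
import numpy as np
from scipy.ndimage import rotate

fov = 64
pad = fov // 2  # black border around the sample
sample = np.zeros((fov + 2 * pad, fov + 2 * pad))
sample[pad:pad + fov, pad:pad + fov] = 1.0  # square that fills the FOV

def fov_view(image, angle_deg, fov):
    """Rotate the full sample, then crop the central FOV (hypothetical helper)."""
    rotated = rotate(image, angle_deg, reshape=False, order=1)
    c = image.shape[0] // 2
    half = fov // 2
    return rotated[c - half:c + half, c - half:c + half]

frame_0 = fov_view(sample, 0, fov)    # square exactly fills the FOV
frame_45 = fov_view(sample, 45, fov)  # FOV corners go dark as black edges move in
```

At 0° the FOV is entirely bright; at 45° the corners of the FOV show the black border that was previously outside the view.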

I guess another difference from the real data case is that there is no z-axis movement. I found it quite hard to visualise how the contents of the FOV are changing with full rotation of the mouse in 3D. Maybe it is worth making a full 3D image for simulation; this striping pattern might not work, but it might work with cell shapes, similar to what you have in the next PR but with spheres instead of circles. I think a 3D image could be rotated and interpolated exactly the same as in the 2D case, and then sampled line-by-line from, say, the central plane.
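A rough sketch of the 3D version, under the same assumptions (hypothetical sizes, one sphere instead of a full cell pattern): build a small volume, rotate it out-of-plane with `scipy.ndimage.rotate`, then read out the central z-plane as a single-plane scan would.

```python
import numpy as np
from scipy.ndimage import rotate

# A small 3D volume containing one off-center sphere (hypothetical test object).
n = 32
zz, yy, xx = np.mgrid[:n, :n, :n]
center = n // 2
sphere = (
    (zz - center) ** 2 + (yy - center) ** 2 + (xx - 8) ** 2 <= 5 ** 2
).astype(float)

# Rotate the whole volume in the (z, x) plane, i.e. an out-of-plane rotation.
rotated = rotate(sphere, angle=30, axes=(0, 2), reshape=False, order=1)

# What a single-plane scan would see: the central z-plane of the rotated volume.
central_plane = rotated[center]
```

The same line-by-line sampling used in 2D could then be applied to `central_plane`, with the rotation angle advancing between lines.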

For light intensity changes, it might be interesting to simulate light hitting the microscope as some function of z-axis distance from the central plane. I am learning some opengl stuff ATM to this effect and it's really cool! It might be of interest in particular the theory e.g. lighting (essentially just lots of dot products!). Maybe in the 2P microscopy setting this gets very complex very fast though.
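As a crude stand-in for the z-dependent intensity idea (the function name and the Gaussian falloff are assumptions, not the microscope's actual excitation profile), brightness could be attenuated as a Gaussian of the distance from the focal plane:

```python
import numpy as np

# Hypothetical attenuation model: intensity falls off as a Gaussian of the
# distance from the focal plane (a crude stand-in for the excitation PSF).
def z_attenuation(z_um, z_focus_um=0.0, sigma_um=5.0):
    return np.exp(-((z_um - z_focus_um) ** 2) / (2 * sigma_um ** 2))

z = np.linspace(-20, 20, 9)
weights = z_attenuation(z)  # 1.0 at the focal plane, falling off with |z|
```

Each voxel's contribution to the sampled plane would then be scaled by its weight before summing, approximating how out-of-focus structures dim during out-of-plane rotation.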

lauraporta commented 19 hours ago

Regarding the full FOV: I see your point and it makes sense. I will expand on it further... The actual rotating sample is larger than the FOV, and during the rotation, different bits of it enter and exit the FOV. So, an idea could be having a 3n x 3n matrix rotating and sampling it in the centre, generating an n x n FOV. What do you think? I'll add this to the TODOs...
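The 3n x 3n idea in a minimal sketch (the `central_fov` helper is hypothetical): rotate the large sample and read out only the central n x n window, so content continually enters and exits the view.

```python
import numpy as np
from scipy.ndimage import rotate

# Hypothetical 3n x 3n sample, much larger than the n x n FOV.
n = 50
sample = np.random.default_rng(0).random((3 * n, 3 * n))

def central_fov(image, n):
    """Crop the central n x n window out of a larger image."""
    start = (image.shape[0] - n) // 2
    return image[start:start + n, start:start + n]

# Rotate the whole sample, then sample only the centre.
fov = central_fov(rotate(sample, 30, reshape=False, order=1), n)
```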

And yes, rotation in Z plays a crucial role and I'd love to make the 3D rotation. Although z-plane artefacts could (should?) be fixed at the data collection level. It is still useful to know the degree of out-of-plane rotation, i.e. the major axis of the rotation ellipse.

To complete the Rotator, it could be worth simulating non-rigid motion artefacts (see some examples of non-rigid deformations in Pnevmatikakis et al. 2017), and the temporal delays between the scanning of each line... I honestly don't know yet to what degree all these improvements will get my synthetic data close to the real data.
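The line-scan temporal delay could be sketched as follows (the `scan_frame` function and its parameters are hypothetical, not the existing `Rotator` interface): each scan line is read from the sample at a slightly later rotation angle, so a single frame mixes many angles.

```python
import numpy as np
from scipy.ndimage import rotate

def scan_frame(sample, n, start_angle, deg_per_s, line_period_s):
    """Build one n x n frame line by line, advancing the rotation angle
    by deg_per_s * line_period_s between consecutive scan lines."""
    start = (sample.shape[0] - n) // 2
    frame = np.empty((n, n))
    for row in range(n):
        angle = start_angle + deg_per_s * line_period_s * row
        rotated = rotate(sample, angle, reshape=False, order=1)
        frame[row] = rotated[start + row, start:start + n]
    return frame

rng = np.random.default_rng(0)
big_sample = rng.random((30, 30))
frame = scan_frame(big_sample, 10, start_angle=0.0,
                   deg_per_s=360.0, line_period_s=1e-3)
```

Rotating the full sample once per line is wasteful but keeps the sketch simple; a real implementation would rotate only the sampled line's coordinates.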

There is also another interesting consideration. If I can reproduce the artefacts almost fully with the simulation, rotation + derotation translate into two known nonlinear mappings from one matrix to another. If I can identify these functions, I can summarise the line-by-line derotation with a single mapping and simply transform the volume according to it...