Closed: lguyot closed this issue 4 years ago
Hi Luc,
Thanks for your questions. As for how the paths were generated, this was not part of our work but something that the team of @lydiang carried out. So I'm afraid I do not know exactly what process occurred beyond what you've mentioned from the whitepaper.
As for the size of `view_lookup`, the shape does not match because you are looking at the flatmap version. The 3-D images to transform will always have shape (132, 80, 114), but they get projected into an image of shape (136, 272). If you use the top view paths instead, the returned size should be (132, 114), since that view just aggregates over the second index. This appears to be a documentation error in `cortical_map.py`, so thanks for pointing it out.
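For what it's worth, the lookup mechanics can be sketched with plain NumPy. This is a toy reconstruction for illustration, not the library's code: the tiny shapes, the `max` aggregation, and the assumption that every view pixel maps to a path row are mine; the real volumes are (132, 80, 114) and the real views are (136, 272) or (132, 114).

```python
import numpy as np

# Toy stand-in for the real data: tiny shapes so the mechanics are easy
# to follow (the real 3-D volumes have shape (132, 80, 114)).
volume_shape = (4, 3, 5)
view_shape = (2, 6)

rng = np.random.default_rng(0)
volume = rng.random(volume_shape)

# view_lookup[i, j] gives the row of `paths` used for view pixel (i, j).
# Each row of `paths` lists flattened voxel indices into the 3-D volume,
# i.e. the streamline of voxels that projects onto that pixel.
n_pixels = view_shape[0] * view_shape[1]
paths = rng.integers(0, np.prod(volume_shape), size=(n_pixels, 3))
view_lookup = np.arange(n_pixels).reshape(view_shape)

# Project by aggregating (here: max) voxel values along each pixel's path.
flat = volume.ravel()
projection = flat[paths[view_lookup]].max(axis=-1)

print(projection.shape)  # same shape as the 2-D view, here (2, 6)
```

The key point is that the projected image's shape is `view_lookup.shape`, independent of the 3-D volume's shape, which is why the flatmap and top-view files return different sizes from the same volumes.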
Hi Kameron,
Thanks for the info and for the documentation fix. I'll wait a bit for @lydiang to reply here.
Best regards, Luc
Thanks to a colleague of mine, Sirio Puchet (Blue Brain Project), I now have more insight into this question. Sirio pointed out that some details on the flattening process are given in the section "Creation of the cortical top-down and flattened views of the CCFv3 for data visualization" of Hierarchical organization of cortical and thalamic connectivity (https://www.nature.com/articles/s41586-019-1716-z):
> A cortical flatmap was also constructed to enable visualization of anatomical and projection information while preserving spatial context for the entire cortex. The flatmap was created by computing the geodesic distance (the shortest path between two points on a curved surface) between every point on the cortical surface and two pairs of selected anchor points. Each pair of anchor points forms one axis of the 2D embedding of the cortex into a flatmap. The 2D coordinate for each point on the cortical surface is obtained by finding the location such that the radial (circular) distance from the anchor points (in 2D) equals the geodesic distance that was computed in 3D. This procedure produces a smooth mapping of the cortical surface onto a 2D plane for visualization. This embedding does not preserve area, and the frontal pole and medial-posterior region is highly distorted. As such, all numerical computation is done in 3D space. Similar techniques are used for texture mapping on geometric models in the field of computer graphics [57].
[57] Oliveira, G. N., Torchelsen, R. P., Comba, J. L. D., Walter, M. & Bastos, R. Geotextures: a multi-source geodesic distance field approach for procedural texturing of complex meshes. 2010 23rd SIBGRAPI Conf. Graphics, Patterns and Images 126–133 (IEEE, 2010).
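The anchor-point placement described in the quoted passage amounts to circle intersection: given the (geodesic) distances from a surface point to a pair of anchors, find the 2-D location whose radial distances to the 2-D anchor positions match them. A minimal sketch of that step, with assumptions of mine: the function name is hypothetical, the anchors are placed at (0, 0) and (anchor_distance, 0), real geodesic distances are replaced by Euclidean stand-ins, and only the upper of the two mirror solutions is returned.

```python
import math

def embed_point(r1, r2, anchor_distance):
    """Place a point in 2-D so that its distances to anchors at (0, 0)
    and (anchor_distance, 0) equal r1 and r2 (circle intersection;
    the upper of the two mirror-image solutions is returned)."""
    x = (anchor_distance**2 + r1**2 - r2**2) / (2 * anchor_distance)
    y_squared = r1**2 - x**2
    if y_squared < 0:  # distances inconsistent: the circles do not meet
        raise ValueError("no intersection for the given distances")
    return x, math.sqrt(y_squared)

# Sanity check with a point whose true position is (3, 4) and anchors
# at (0, 0) and (10, 0): the recovered coordinates should be (3, 4).
r1 = math.hypot(3, 4)        # distance to the first anchor
r2 = math.hypot(10 - 3, 4)   # distance to the second anchor
x, y = embed_point(r1, r2, 10.0)
```

As the quoted text notes, such an embedding is smooth but not area-preserving, which is presumably why all numerical computation in the paper stays in 3-D.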
Dear contributors to `mouse_connectivity_models`,

I am looking for specific information about the files located in `cortical_coordinates`.
The process by which a curved cortical coordinate system was obtained is documented on pages 6 and 7 of the technical white paper.
Although I find the above process clear, I cannot infer from this description how the pia and white matter surfaces were flattened to fit in a 2D numpy array such as those of `dorsal_flatmap_paths_100.h5` and `top_view_paths_100.h5`. In other words, what is the recipe used to build the `view_lookup` arrays held by these files? Was some area-preserving transformation applied to the dorsal and top surfaces of the isocortex volume? If so, which one?

Many thanks in advance for your help, Luc.
As a side note: the returned shape doesn't match the expected value of (132, 114) indicated in `cortical_map.py`.