BIDS-collaborative / brainspell

brainspell is a web platform to facilitate the creation of an open, human-curated classification of the neuroimaging literature
https://gitter.im/BIDS-collaborative/brainspell

How do you find the coordinate-based viewer? #5

Closed: davclark closed this issue 8 years ago

davclark commented 8 years ago

@r03ert0 gave a demo of this, but I don't see how to access it. Is it a "private" feature?

r03ert0 commented 8 years ago

There are three viewers:

  1. stereotaxic viewer: this viewer appears when you search for a term, and shows the sum of all coordinates in all experiments that respond to the search query. The viewer shows 2D slices of a brain with a red/green colormap indicating the number of coordinates mapping each region. There are 3 different views: axial (horizontal plane, normal pointing up), coronal (normal pointing to the front), and sagittal (normal pointing to the right). The code for this viewer is in brainspell-search.js, lines 41 to 299.
  2. translucent viewer with isosurface: this viewer shows a translucent brain with a red blob representing the regions mapped by the search query. An isosurface is built in real time (surface nets algorithm), and the threshold is set with a slider. The 3D viewer uses three.js, with a shader for the transparency (which only works on computers with a GPU). The native brain mesh has very few faces, and is subdivided in the client for speed. The code is in brainspell-search.js, lines 300 to 704.
  3. translucent viewer with coordinates as spheres: this viewer appears on the 'article' pages, next to each coordinate table. Each sphere corresponds to a row in the table: clicking on a row highlights the sphere, and clicking on a sphere highlights the row. The code is in brainspell-article.js, lines 883 to 927 and 1497 to 1662 (mostly). A minimal sketch of this linking pattern follows the list.
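
As a concrete illustration of the row/sphere linking in the third viewer, here is a minimal three.js sketch (this is not the brainspell-article.js code; the `coords` array, sizes, and colors are made-up placeholders):

```js
// Minimal sketch: one sphere per coordinate-table row, click-to-highlight.
// Assumes three.js is available as an ES module; `coords` is hypothetical.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, 1, 0.1, 1000);
camera.position.z = 200;
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(400, 400);
document.body.appendChild(renderer.domElement);
scene.add(new THREE.AmbientLight(0xffffff));

const coords = [{ x: -42, y: 18, z: 6 }, { x: 10, y: -60, z: 22 }]; // e.g. MNI mm
const spheres = coords.map((c, i) => {
  const mesh = new THREE.Mesh(
    new THREE.SphereGeometry(4, 16, 16),
    new THREE.MeshLambertMaterial({ color: 0x3366cc })
  );
  mesh.position.set(c.x, c.y, c.z);
  mesh.userData.row = i; // link each sphere back to its table row
  scene.add(mesh);
  return mesh;
});

// Clicking a sphere highlights it and reports the matching table row.
const raycaster = new THREE.Raycaster();
renderer.domElement.addEventListener('click', (ev) => {
  const r = renderer.domElement.getBoundingClientRect();
  const mouse = new THREE.Vector2(
    ((ev.clientX - r.left) / r.width) * 2 - 1,
    -((ev.clientY - r.top) / r.height) * 2 + 1
  );
  raycaster.setFromCamera(mouse, camera);
  const hit = raycaster.intersectObjects(spheres)[0];
  if (hit) {
    spheres.forEach((s) => s.material.color.set(0x3366cc)); // reset all
    hit.object.material.color.set(0xff0000);                // highlight hit
    console.log('highlight table row', hit.object.userData.row);
    renderer.render(scene, camera);
  }
});
renderer.render(scene, camera);
```

Highlighting in the other direction (row click to sphere) is the same lookup run from a table-row event handler.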
davclark commented 8 years ago

Let's do a hands-on demo of all three at today's meeting?

jbpoline commented 8 years ago

Yes - sounds good!

davclark commented 8 years ago

@r03ert0, I think maybe I got confused about another viewer you showed us that first Thursday meeting. Do you recall what web app you pulled up where you click on a voxel and you get an ordered list of keywords?

r03ert0 commented 8 years ago

That was the coactivation map viewer app. It's here:

https://github.com/r03ert0/CoactivationMap.app

There are 2 parts to it: the map and the viewer.

The map: a precomputed, zip-compressed archive of coactivation 3D volumes. For each voxel in the brain there is a 3D volume giving the number of papers that contain both the seed voxel and each other brain voxel. In other words, if the seed voxel coordinates are x1, y1, z1 and the destination voxel coordinates are x2, y2, z2, the map is k = coincidences(x1, y1, z1, x2, y2, z2). Inside the zip-compressed archive there is also a sum.img file that contains the total number of papers that map each brain voxel. Using both maps it is possible to know, for each pair of voxels in the brain: a, the number of papers mapping the first voxel; b, the number of papers mapping the second voxel; and k, the number of papers mapping both. With that information it is possible to compute many statistics: correlation, LR, MI, etc.

Also in the zip archive is a list of the top keywords associated with each seed voxel. To compute this list I took all the tags in brainspell one by one, and for each tag I made a map adding up all the coordinates in the papers that contained the tag. Next, for each seed voxel in the coactivation map I computed the correlation between its coactivation volume and each of the tag maps. I sorted the tags by R^2 and selected the top 10. Each list goes into a text file associated with each position in the brain.
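
To make the ranking step concrete, here is a sketch in JavaScript, assuming hypothetical flat Float32Array volumes on a common voxel grid (`pearson` and `topTags` are illustrative names, not brainspell functions):

```js
// Pearson correlation of two equally sized flat volumes.
function pearson(a, b) {
  const n = a.length;
  let sa = 0, sb = 0, saa = 0, sbb = 0, sab = 0;
  for (let i = 0; i < n; i++) {
    sa += a[i]; sb += b[i];
    saa += a[i] * a[i]; sbb += b[i] * b[i]; sab += a[i] * b[i];
  }
  const cov = sab - (sa * sb) / n;
  const va = saa - (sa * sa) / n;
  const vb = sbb - (sb * sb) / n;
  return cov / Math.sqrt(va * vb);
}

// Rank tag maps against one seed voxel's coactivation volume by R^2.
function topTags(coactivationVolume, tagMaps, k = 10) {
  return Object.entries(tagMaps) // { tagName: Float32Array }
    .map(([tag, map]) => ({ tag, r2: pearson(coactivationVolume, map) ** 2 }))
    .sort((p, q) => q.r2 - p.r2)
    .slice(0, k); // the per-voxel keyword list stored in the archive
}
```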

The viewer: a very simple, Mac-only viewer for the coactivation map. It shows a brain, lets the user click on the brain image, decompresses the corresponding coactivation volume from the big zip archive, and computes an LR map (there is code to compute other statistics, but for the moment the LR is hard-wired). It also displays the list of tags associated with each coactivation volume as links. Clicking on a link launches brainspell, which displays all the papers containing the tag.
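
The LR formula itself isn't spelled out above; one plausible per-voxel likelihood ratio built from the counts a, b, and k defined earlier (the viewer's hard-wired statistic may differ) is sketched below:

```js
// Plausible LR map (an assumption, not necessarily the app's exact formula):
//   N        total number of papers
//   a        papers mapping the seed voxel (sum.img at the seed)
//   bVolume  papers mapping each target voxel (sum.img)
//   kVolume  papers mapping both (the seed's coactivation volume)
function lrMap(N, a, bVolume, kVolume) {
  const out = new Float32Array(kVolume.length);
  for (let v = 0; v < kVolume.length; v++) {
    const k = kVolume[v], b = bVolume[v];
    const pGivenSeed = k / a;               // P(voxel | seed reported)
    const pGivenNoSeed = (b - k) / (N - a); // P(voxel | seed not reported)
    out[v] = pGivenNoSeed > 0 ? pGivenSeed / pGivenNoSeed : 0;
  }
  return out;
}
```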

It shouldn't be hard to move the coactivation map viewer app to the web, and it would be very useful to be able to select papers by clicking on a brain map, in addition to the text search that is currently implemented. The only issue would be speed. Each individual map is not big, but how could we achieve the same fluidity as in the desktop viewer?

Maybe instead of downloading the files one by one there could be a direct websocket communication between server and client? Maybe each volume could be further compressed using a DCT? (I started to try this, and have code for a 3D discrete cosine transform.) Maybe there is a smart predictive scheme to incrementally load the volumes the user is most likely to query next given their current position? (For example, start downloading the volumes associated with seed voxels in the currently displayed brain slice; perhaps download only every other map and display an interpolated image during the time it takes to download the real data.)

My ideal solution would be to compress the complete 6D map k(x1, y1, z1, x2, y2, z2) using a 6-dimensional DCT: the coactivation map is very smooth and regular, so my guess is that the complete zip archive (2 GB) could easily be compressed to a few MB, and the map could be downloaded progressively and refined.
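
As a sketch of the DCT idea for a single volume (a naive, unnormalized DCT-II applied separably along each axis; a real implementation would use a fast transform and then keep only a small low-frequency block of coefficients):

```js
// Unnormalized 1D DCT-II along one axis of a flat volume (O(n^2), for clarity).
function dct1d(src, dst, n, stride, offset) {
  for (let k = 0; k < n; k++) {
    let s = 0;
    for (let i = 0; i < n; i++) {
      s += src[offset + i * stride] * Math.cos((Math.PI * (i + 0.5) * k) / n);
    }
    dst[offset + k * stride] = s;
  }
}

// Separable 3D DCT of a flat nx*ny*nz volume (index = z*ny*nx + y*nx + x).
function dct3d(vol, nx, ny, nz) {
  const a = Float32Array.from(vol), b = new Float32Array(vol.length);
  for (let z = 0; z < nz; z++)           // transform along x
    for (let y = 0; y < ny; y++)
      dct1d(a, b, nx, 1, (z * ny + y) * nx);
  for (let z = 0; z < nz; z++)           // then along y
    for (let x = 0; x < nx; x++)
      dct1d(b, a, ny, nx, z * ny * nx + x);
  for (let y = 0; y < ny; y++)           // then along z
    for (let x = 0; x < nx; x++)
      dct1d(a, b, nz, ny * nx, y * nx + x);
  return b; // keep e.g. only the lowest 8x8x8 coefficients for transfer
}
```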

all the best, roberto


r03ert0 commented 8 years ago

I hope my previous comment answers the question; I'll close the issue.