BDemps closed this issue 6 years ago.
I think that this is going to require repeating the nucleic segment classification task to produce the larger lists for each image. Are you OK doing that? If so, I expect we'll do something like this:

1. Add a new flag for Image Region to distinguish our existing "nucleic" segmentation mode tasks from these new ones, which are not going to be used to align the two images.
   - These will have to be segmented with the 3D viewer, since viewer2d chokes on large image regions.
   - The current launcher won't tell you whether you're supposed to classify the clear subset for alignment or the full set for later pairing analysis.
2. Add a new "Nucleic Pair Study" table which acts much like the current "Synaptic Pair Study":
   - Links to an existing "Image Pair Study" which provides the alignment matrix.
   - Identifies two additional nucleic regions which provide the larger pointclouds.
3. Extend the server automation again to produce the registered versions of these new pointclouds (see the sketch after this list).
4. Experiment with some of the client-side analysis tools to see how well these clouds actually pair up.
5. Devise some new client-side analyses/plots to actually compare the paired nuclei intensity values.
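For step (3), a minimal sketch of the registration, assuming the Image Pair Study alignment matrix is a 4x4 homogeneous transform and the pointclouds are (N, 3) coordinate arrays (the function name here is just illustrative):

```python
import numpy as np

def register_pointcloud(points, alignment_matrix):
    """Map an (N, 3) pointcloud through a 4x4 homogeneous transform."""
    n = points.shape[0]
    homogeneous = np.hstack([points, np.ones((n, 1))])  # (N, 4) homogeneous coords
    mapped = homogeneous @ alignment_matrix.T           # apply the transform row-wise
    return mapped[:, :3] / mapped[:, 3:4]               # project back to (N, 3)
```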
When you do some of these manually in step (1), we can hopefully assess whether the thresholding tools in the 3D viewer can give you a good enough classification result without having to manually toggle each segment. If a fixed threshold can work well enough for multiple images, we might then consider automating it to skip the interactive viewer.
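To make that assessment concrete, something like this could score a candidate fixed threshold against your manual toggles; a rough sketch, where the per-segment measure and label arrays stand in for whatever the segmenter actually exports:

```python
import numpy as np

def threshold_agreement(measures, manual_labels, threshold):
    """Fraction of segments where a fixed intensity threshold agrees
    with the manual true/false classification of the same segments."""
    auto_labels = np.asarray(measures) >= threshold
    return float(np.mean(auto_labels == np.asarray(manual_labels)))
```

If agreement stays high across several images at one threshold, that would be the signal that we can skip the interactive viewer.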
Depending on whether classification can be automated, we might also consider further development on the viewer2d tool, e.g. so you could assign some of the manual workload to student workers. This would have to involve coordinated enhancements to the server-side preprocessing, the launcher, and the viewer2d so that a reduced-resolution nucleic image can be visualized and classified on the limited client graphics hardware.
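The reduced-resolution part of that preprocessing could be as simple as block-mean downsampling; a sketch assuming the nucleic image is a (Z, Y, X) array (the real pipeline's storage format may differ):

```python
import numpy as np

def downsample_volume(volume, factor=2):
    """Block-mean downsample a (Z, Y, X) volume by an integer factor,
    cropping each axis to a multiple of the factor first, so the
    reduced image fits in limited client graphics memory."""
    z, y, x = (d - d % factor for d in volume.shape)
    v = volume[:z, :y, :x]
    v = v.reshape(z // factor, factor, y // factor, factor, x // factor, factor)
    return v.mean(axis=(1, 3, 5))
```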
This sounds reasonable. Very happy to re-segment, and it'd be great to be able to get undergrads to do it in the future, too.
I've got a trivial Nucleic Pair Study table for demo/test on synapse-dev... you may want to give it a quick test to see if you understand the data-entry flow. @BDemps If this is OK, we could easily replicate on the production server to start collecting data.
With this minimalist approach, we don't distinguish the two different forms of nucleic region analysis. You'd have to know that one region is used for the Image Pair Study to do the before/after alignment, while new regions would have to be added for the same images for the nucleic "survey" where you segment more aggressively. It is these survey regions you would link into the new Nucleic Pair Study record.
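To spell out that linkage, here is a hypothetical sketch of the record shapes; the actual catalog tables and column names on synapse-dev may differ:

```python
from dataclasses import dataclass

@dataclass
class ImagePairStudy:
    id: str
    alignment_matrix: str  # registration derived from the small set of confidently paired nuclei

@dataclass
class NucleicPairStudy:
    id: str
    image_pair_study: str  # reference to the ImagePairStudy providing the alignment
    region_1: str          # aggressively segmented "survey" region from image 1
    region_2: str          # aggressively segmented "survey" region from image 2
```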
However, I haven't worked on the automation part to make it pretty like the existing Synaptic Pair Study table.
Automation for the new nuc pairs table is now in test on the dev server.
BTW, this should be on the prod server now. If it is still of interest, we should try it on a few images for real, and decide if we want to refine it any more before doing too many of them... @BDemps
Add a flow for nucleic records which act very much like the Synaptic Pair Study: linking to an existing Image Pair Study (which uses carefully chosen nuclei to align two images) and specifying nucleic regions rather than synaptic regions for pointcloud analysis. These will be new nucleic surveys which highlight additional nuclei beyond the small set of confidently paired ones used to date for image alignment.
Sub-Tasks

- Nucleic Pair Study table to record the new survey pairs

Original Issue Text
We talked about this possibility the other day, but just to refresh your memory... Currently, we have data for a subset of all nuclei in a given image stack, where the segmenter is confident that these particular nuclei are visible at both imaging timepoints. This yields roughly 10-30 nucleus pairs. Would it be possible to register the data and then determine the most likely remaining nucleus pairs from among all of the remaining candidate segments (e.g., find paired segments that are less than a nucleus radius apart)? Then we'd be able to see the differences in intensity between many paired nuclei within a given image, to see if there are brightness changes related to the changes we see in synapses in the learners/nonlearners/etc.
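For illustration, the radius-based pairing could be sketched like this, assuming both pointclouds are already registered into a common coordinate frame; the greedy nearest-neighbor matching is a simplification, since it lets two nuclei in one image claim the same partner:

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_nuclei(points_a, points_b, radius):
    """Match each point in A to its nearest neighbor in B if that
    neighbor lies within `radius` (e.g. one nucleus radius).
    Returns a list of (index_a, index_b) pairs."""
    tree = cKDTree(points_b)
    dists, idx = tree.query(points_a, distance_upper_bound=radius)
    return [(i, j) for i, (d, j) in enumerate(zip(dists, idx))
            if np.isfinite(d)]
```

The intensity comparison would then just index the paired nuclei's measures from the two timepoints.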