jiyuuchc / lacss

A deep learning model for single-cell segmentation from microscopy images.
https://jiyuuchc.github.io/lacss/
MIT License

Set up and instructions to train and test #1

Closed VikasRajashekar closed 1 year ago

VikasRajashekar commented 2 years ago

First of all, very nice work.

Is it possible to provide an environment file for setup? It would also be nice if you updated the instructions to train, evaluate, and reproduce the results.

jiyuuchc commented 2 years ago

Hi,

Thanks for your interest in our project!

We've added/updated some experimental info that will hopefully be useful:

Feel free to contact me directly for any other issues you run into.

Ji



VikasRajashekar commented 2 years ago

@jiyuuchc Thanks for the reply.

I wish to reproduce your results on LIVECell. To convert the LIVECell dataset to tfrecords, what should be the folder structure of the dataset directory? Currently, LIVECell comes as below:

/livecell-dataset/
├── LIVECell_dataset_2021/
│   ├── annotations/
│   ├── models/
│   ├── nuclear_count_benchmark/
│   └── images.zip
├── README.md
└── LICENSE

Please let me know.

Also, for the --checkpoint parameter in test.py, the value is the path of a folder containing the following files, right?

1. chkpt.data...
2. chkpt.index
3. config.json

jiyuuchc commented 2 years ago

Your folder structure is fine. If you have already unzipped the images.zip file, you should run the script as:

create_tfrecord(data_dir, extract_zip=False)

The tfrecord files will be saved in the same directory. The script is not coded very efficiently and will take a couple of hours.
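Before running the conversion, it may help to sanity-check the layout. Here is a small hypothetical helper (not part of the lacss repository; the folder names follow the listing above, and it assumes images.zip has already been extracted to an images/ folder):

```python
# Hypothetical helper (not part of the lacss repository): check the expected
# LIVECell layout before calling create_tfrecord(data_dir, extract_zip=False).
from pathlib import Path

def missing_livecell_entries(data_dir):
    """Return the expected entries that are absent under LIVECell_dataset_2021/."""
    root = Path(data_dir) / "LIVECell_dataset_2021"
    # "images" assumes images.zip has already been unzipped in place
    expected = ["annotations", "images"]
    return [name for name in expected if not (root / name).exists()]
```

An empty return value means the directory looks ready for conversion.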

You can also download pre-generated tfrecords from the link below. This is the same link shown on the GitHub page for other large files such as pre-trained models.

https://drive.google.com/drive/folders/1OWdll3vRcwWhuZgNoom1-BHSg0rpvZrc?usp=sharing

Checkpoints were saved at every validation point during training, so there are quite a few of them. The checkpoint names look something like 'chkpts-21'.
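As a side note, one quick way to enumerate such checkpoints is sketched below (hypothetical code, assuming the standard TensorFlow convention of paired <prefix>.index and <prefix>.data-* files):

```python
# Hypothetical sketch: list checkpoint prefixes such as 'chkpts-21' in a
# directory, assuming each checkpoint is saved as <prefix>.index plus
# <prefix>.data-* shard files (standard TensorFlow naming).
from pathlib import Path

def list_checkpoints(ckpt_dir):
    # every checkpoint has exactly one .index file, so use it as the marker
    return sorted(p.stem for p in Path(ckpt_dir).glob("chkpts-*.index"))
```

Any returned prefix can then be passed as the --checkpoint value.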

Ji



VikasRajashekar commented 2 years ago

@jiyuuchc Thanks for the inputs.

I was able to run the supervised model on LiveCell and got the following results:

APs...
0:   0.7313, 0.6702, 0.6047, 0.5297, 0.4413, 0.3403, 0.2235, 0.1057, 0.0166, 0.0000
1:   0.7118, 0.6581, 0.5993, 0.5324, 0.4599, 0.3710, 0.2716, 0.1502, 0.0322, 0.0000
2:   0.8872, 0.8617, 0.8272, 0.7783, 0.7030, 0.5806, 0.3941, 0.1659, 0.0151, 0.0000
3:   0.7389, 0.6969, 0.6509, 0.6060, 0.5527, 0.4818, 0.3807, 0.2312, 0.0709, 0.0010
4:   0.7440, 0.6842, 0.6116, 0.5250, 0.4197, 0.2937, 0.1609, 0.0513, 0.0029, 0.0000
5:   0.4865, 0.4048, 0.3225, 0.2416, 0.1571, 0.0823, 0.0305, 0.0060, 0.0002, 0.0000
6:   0.8168, 0.7790, 0.7352, 0.6779, 0.5952, 0.4773, 0.3201, 0.1385, 0.0179, 0.0000
7:   0.9357, 0.9222, 0.9069, 0.8885, 0.8580, 0.7998, 0.6722, 0.4222, 0.1032, 0.0001
all: 0.7674, 0.7227, 0.6711, 0.6098, 0.5315, 0.4287, 0.2964, 0.1416, 0.0225, 0.0000

FNRs...
0:   0.2115, 0.2595, 0.3136, 0.3789, 0.4600, 0.5583, 0.6811, 0.8181, 0.9470, 0.9984
1:   0.1830, 0.2302, 0.2854, 0.3531, 0.4301, 0.5270, 0.6382, 0.7742, 0.9174, 0.9981
2:   0.0602, 0.0834, 0.1148, 0.1596, 0.2274, 0.3312, 0.4880, 0.7056, 0.9246, 0.9998
3:   0.1935, 0.2236, 0.2602, 0.2985, 0.3455, 0.4095, 0.5004, 0.6430, 0.8285, 0.9839
4:   0.1654, 0.2171, 0.2823, 0.3633, 0.4652, 0.5920, 0.7370, 0.8801, 0.9791, 0.9999
5:   0.3897, 0.4556, 0.5292, 0.6092, 0.7018, 0.8012, 0.8933, 0.9626, 0.9942, 0.9999
6:   0.1558, 0.1827, 0.2155, 0.2600, 0.3249, 0.4204, 0.5563, 0.7416, 0.9258, 0.9977
7:   0.0565, 0.0672, 0.0798, 0.0950, 0.1193, 0.1627, 0.2498, 0.4337, 0.7504, 0.9920
all: 0.1722, 0.2110, 0.2578, 0.3151, 0.3890, 0.4857, 0.6096, 0.7642, 0.9226, 0.9979

Question 1: So here, row-wise 0...7 are the eight cell types, and the 10 columns are the IoU thresholds [.5, .55, .6, .65, .7, .75, .8, .85, .9, .95]?

Question 2: For semi-supervised training, I see that the checkpoints are based on different cell types.

So, for the command:

python experiments/livecell/test.py data logs --checkpoint model_weights/chkpt

in the semi-supervised case, the checkpoint is trained on one cell type and evaluated on all cell types, right?

jiyuuchc commented 2 years ago

That's correct.

Ji
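For reference, the threshold list confirmed above can be generated as follows (a minimal sketch; the values come from this thread, not from the repository's code):

```python
# The 10 IoU thresholds discussed above: 0.5 to 0.95 in steps of 0.05,
# matching the 10 columns of the AP/FNR tables.
thresholds = [round(0.5 + 0.05 * i, 2) for i in range(10)]
print(thresholds)  # [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
```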



VikasRajashekar commented 2 years ago

Hi Ji,

I tried to run the weakly-supervised version on LiveCell. However, the Huh7 and shsy5y checkpoints were not accessible earlier; I can see them now.

Here are the results.

image

Question 1: I see that the results I got were a bit lower than the ones reported in the paper. Am I missing something?

Question 2: I would like to generate the annotations that you created for LIVECell for weak supervision. How do I do that? What is the command used for the same?

jiyuuchc commented 2 years ago

> I tried to run the weak supervised version of LiveCell. However, Huh7 and shsy5y checkpoints were not accessible. I can see them now.

Can you be more specific about the problem with downloading? I cannot reproduce the issue.

> Here are the results.
>
> image
>
> Question 1: I see that the results I got were a bit less than the reported ones on paper. Am I missing something?

I forgot to mention that the default test script in the repository sets a hard threshold (line 35) to throw away all low-ranking results. This is much closer to real-life use cases, where precision is generally a lot more important than recall. The results reported in the paper were obtained without this threshold, which achieved a slightly higher AP and higher recall (but sacrificed precision).

Commenting out line 35 will allow you to obtain results identical to those reported in the paper.
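The trade-off can be sketched with a toy example in plain Python (illustrative only, not the repository's test code): discarding low-confidence detections typically raises precision at the cost of recall.

```python
# Toy example (not the repository's code): a hard score threshold keeps only
# high-confidence detections, trading recall for precision.
detections = [
    {"score": 0.92, "correct": True},
    {"score": 0.81, "correct": True},
    {"score": 0.45, "correct": False},
    {"score": 0.30, "correct": True},
]

def precision_recall(dets, score_threshold=0.0):
    kept = [d for d in dets if d["score"] >= score_threshold]
    tp = sum(d["correct"] for d in kept)          # true positives kept
    total_true = sum(d["correct"] for d in dets)  # all true objects
    precision = tp / len(kept) if kept else 0.0
    recall = tp / total_true
    return precision, recall

# without a threshold: precision 0.75, recall 1.0
# with threshold 0.5:  precision 1.0, recall ~0.67
```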

> Question 2: I would like to generate annotations that you created for livecell for weak supervision. How do I do that? What is the command used for the same?

The weak annotations are in the tfrecord files. The specific code responsible is in the function experiments.livecell.data.parse_coco_record() (see lines 74-96).

If you want to check the numeric values, take a peek at the tf dataset object:

from experiments.livecell.data import livecell_dataset_from_tfrecord

dataset = livecell_dataset_from_tfrecord(path_to_tfrecord_file)
peek = next(iter(dataset))

VikasRajashekar commented 2 years ago

@jiyuuchc

Thanks for the reply. After commenting out line 35, I was able to reproduce the results from the paper.

image

Regarding Question 2: I would like to generate the annotations that you created for LIVECell for weak supervision. How do I do that? What is the command used for the same?

I had a closer look, but according to the code shared, control never reaches lines 74-96 while creating the tfrecords. Am I missing something?

I took a peek as you suggested and saw that "binary_mask" seems to be a segmentation mask from the annotations.

image

Could you please let me know?

jiyuuchc commented 2 years ago

Yes, you are right: at some point during development we rewrote the parse_coco_record() function so that it no longer calls livecell_data_gen(). Instead, the relevant annotation code is now within the parse_coco_record() function itself (around lines 155-171).

Sorry for the confusion.

Yes, the binary_mask field is the image-level segmentation label. In addition, the 'locations' field holds the LOI labels. The dataset contains some other labels as well, but they are not used for weakly-supervised training.
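To make the two label fields concrete, here is a purely illustrative sketch (the field names follow this thread; the shapes and values are made up and are not actual tfrecord contents):

```python
# Illustrative sketch only: the two label fields used for weakly-supervised
# training, as described above. Real records come from
# livecell_dataset_from_tfrecord(); these values are made up.
example = {
    # image-level segmentation label: 1 where any cell is present, 0 elsewhere
    "binary_mask": [
        [0, 1, 1],
        [0, 1, 0],
        [0, 0, 0],
    ],
    # LOI labels: one (y, x) location per annotated cell
    "locations": [(1.0, 1.5), (0.0, 2.0)],
}

# e.g. the number of point labels equals the number of annotated cells
num_cells = len(example["locations"])
```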