jbonato1 / ASTRA

Astrocytes Semantic Segmentation
Apache License 2.0

How to get started with inference #4

Open PTRRupprecht opened 2 weeks ago

PTRRupprecht commented 2 weeks ago

Hi,

I'm currently trying to run ASTRA on my own data, with the plan of applying the existing pretrained models to it. Therefore, I looked into the script Inference_Pipeline10_iter.ipynb and tried to adapt it to my dataset. However, I ran into a couple of problems and questions where I need help.

  1. First, the script includes the line sys.path.insert(0,root_folder+'/RASTA/modules/'). I replaced it with the path of the repository, ASTRA-master\ASTRA\modules; is that fine? (My full path setup is sketched after this list.)

  2. Second, there are two lines where I'm less certain how to modify them:

fov_DNN_weights_folder = root_folder+'/weights/dense_up'
set_dir = root_folder+'/set1/'

There is a dense_up.py file in the modules/models/ folder. Should I insert the path to this file, or to another file? For the set_dir, am I supposed to insert the folder ASTRA-master\set, or something else?

  3. I find it hard to guess what input format the script requires. Is it raw TIFF files? How should they be named or organized? There seem to be some folders with integer names (I'm referring to the line test_folder_str = str(jj)), but it is not clear to me whether these are folders of single TIFF files that will be concatenated or folders of different recordings that will be processed separately.
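For context, my current adaptation looks roughly like this; every path below is a guess on my part, not something confirmed by the repository:

```python
import sys

# Guess: root of the extracted repository (placeholder path)
root_folder = '/path/to/ASTRA-master'

# Question 1: point Python at the modules folder of this repository
sys.path.insert(0, root_folder + '/ASTRA/modules/')

# Question 2: my guesses for the weights folder and the data folder
fov_DNN_weights_folder = root_folder + '/weights/dense_up'
set_dir = root_folder + '/set1/'
```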

As you can see, I'm really struggling to put the different pieces together and understand the intended logic of the pipeline. If you can provide some guidance on these questions (or also general guidance on how to approach the usage of ASTRA), I would highly appreciate your input.

Best wishes, Peter

jbonato1 commented 2 weeks ago

Hi, first use the notebook/Inference_Pipeline.ipynb notebook; 10_iter is an old version. It is an example using the data structure that I had in my dataset. Feel free to modify the code and adapt it to yours.

YYY is the path to the weights that you have downloaded from Drive:

model.load_state_dict(torch.load(YYY))
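For example, something like this (the stand-in network below is only illustrative; instantiate the actual model from modules/models/dense_up.py instead):

```python
import torch
import torch.nn as nn

# Stand-in network for illustration only; in the real pipeline,
# build the model defined in modules/models/dense_up.py
model = nn.Sequential(nn.Conv2d(1, 8, kernel_size=3, padding=1))

# YYY: path to the weights downloaded from Drive
state = torch.load('/path/to/weights.pth', map_location='cpu')
model.load_state_dict(state)
model.eval()  # switch to inference mode before predicting
```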

XXX is the path to your recording in:

stack = io.imread(XXX).astype(np.uint16)

The recording should be in TIFF format.
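A minimal sanity check, assuming the recording is a single multi-page TIFF:

```python
import numpy as np
from skimage import io

# XXX: path to the recording saved as one TIFF stack
stack = io.imread('/path/to/recording.tif').astype(np.uint16)

# For a multi-page TIFF this is typically (frames, height, width)
print(stack.shape)
```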

There are several parameters to set, which depend on the experimental conditions. Can you share them with me so I can guide you?

Best, JB

PTRRupprecht commented 2 weeks ago

Hi @jbonato1,

Thanks for the quick reply! I only have access to the computer during the week, so I will only be able to test this out next week.

For the experimental conditions:

Is there anything else that you need to know?

jbonato1 commented 1 week ago

Hi @PTRRupprecht, the pixel size is roughly 1.26 um/px, so you can try the set of weights obtained from training on the 1 um/px FOV; I don't know if it will be sufficient. Are the videos already motion corrected? It will also be easier if you concatenate all the TIFF files into a single stack.
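A minimal sketch of the concatenation, assuming each file is a multi-frame stack with time along the first axis and that the filenames sort in acquisition order:

```python
import glob
import numpy as np
import tifffile

# Collect the per-recording TIFFs; sorted() assumes the
# filenames order chronologically
files = sorted(glob.glob('/path/to/session/*.tif'))

# Stack every frame along the time axis (axis 0)
stack = np.concatenate([tifffile.imread(f) for f in files], axis=0)

tifffile.imwrite('/path/to/session/concatenated.tif', stack)
```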

I can try to upload a pipeline ready for your data with the info you gave me, but I'll only have time in two weeks.

PTRRupprecht commented 1 week ago

Hi @jbonato1,

Thanks for your reply. The videos are already motion-corrected, so I will set the flag in ASTRA so that it does not run another motion correction.

I've now played around with the notebook/Inference_Pipeline.ipynb file. A few changes were necessary before it worked at all. I'm writing them down because they might be useful for others who encounter similar problems:

This procedure worked fine on the test dataset provided via Google Drive. I then tested the same algorithm (with the 1 um/px model) on my own data. It has not detected anything yet, but I guess I will have to play around with thresholds and parameters. The main difference is that my data contain much more shot noise. I hope I will figure out which parameters are essential to make this work.

Best, Peter