EricZQu / Quantifying-Nanoparticle-Assembly-States-Through-Deep-Learning


Errors in creating conda environment #1

Closed Rajarshi1001 closed 2 years ago

Rajarshi1001 commented 2 years ago

I was following the steps for creating the conda env. I cloned the repository, navigated to the detection/ folder, and ran conda env create -f conda-gpu.yml, but the command terminated with the following error:

Could you please look into this issue and suggest any changes?

EricZQu commented 2 years ago

Hi, thank you for pointing that out. It seems that conda has removed support for that TensorFlow package. I have updated the yml files; you can download them again, or just change tensorflow-gpu==2.1.0rc1 to tensorflow-gpu==2.1.0.

Rajarshi1001 commented 2 years ago

Should I create the conda environment inside the detection directory?

Rajarshi1001 commented 2 years ago

I have successfully run the script. I just wanted to ask where the output of python detect.py --cut_size 100 --image_type tif --image_directory samples/ --output_type boxes will be produced. I also wanted to add that the results are not saved inside the output folder.

EricZQu commented 2 years ago

If you are trying to use the example image, please change the --image_type from tif to png. Sorry for the typo.

Rajarshi1001 commented 2 years ago

While running the script with png as the argument, I got the following TensorFlow-related error: [error screenshot]

I also have some TEM images with a .bmp extension. How can I use this model to detect nanoparticles in those files?

EricZQu commented 2 years ago

For the error, please install tensorflow-gpu following the website linked in the error message. The steps differ for every PC and GPU (pay attention to the CUDA version). You might also need to downgrade numpy with pip install numpy==1.19.5.

For the new extension, you can either convert it to one of our supported extensions, or modify the image-reading code to support that extension.
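As a minimal sketch of the first option (the filenames here are placeholders and Pillow is an assumed dependency, not something the repository requires), the conversion could be done like this:

```python
# Minimal sketch: convert a .bmp TEM image to .png so it matches a supported extension.
# Filenames are placeholders; Pillow (pip install pillow) is an assumed dependency.
from PIL import Image

Image.open("sample_tem_image.bmp").save("sample_tem_image.png")
```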

Rajarshi1001 commented 2 years ago

I added a sample TEM image apart from the examples already present, but the code doesn't seem to detect the nanoparticles properly.

The image I used: KL-4-86_600. The output with detected particles: KL-4-86_600.

Rajarshi1001 commented 2 years ago

Can you suggest any ways to improve it?

EricZQu commented 2 years ago

You can try increasing the --cut_size; it should match the diameter of the nanoparticles (in pixels) in your input image. If this still does not work well, you can try labeling and training the model on your own images (since the nanoparticles in your image do not look much like ours).
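For illustration only (the cut size value here is an example, not a recommendation from this thread), the earlier command could be rerun with a larger cut size: python detect.py --cut_size 200 --image_type png --image_directory samples/ --output_type boxes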

Rajarshi1001 commented 2 years ago

I am suddenly unable to run the python script detect.py due to the following error: [error screenshot]

Rajarshi1001 commented 2 years ago

Could you please specify whether the numbers in the .txt file generated by running the pre-trained yolov3 model on a sample .png file are in pixels? I would like to know what the 4 numbers physically represent, which would help me in creating my own dataset. Could you please reply to my question?

EricZQu commented 2 years ago

Yes, they are in pixels.

Could you please read the documentation carefully? Here is the documentation about the output_type:

There are 5 types of output offered in the tool.
- boxes: Output a txt file with basic bounding boxes in each line (x_min y_min x_max y_max)
- center: Output a txt file with center coordinates of boxes in each line (x_center y_center)
- center_size: Output a txt file with center coordinates of boxes and size in each line (x_center y_center width*height)
- json: Output a json file that is compatible with the labeling software "colabler"
- benchmark: Output a txt file for mAP calculation ('particle' confidence x_min y_min x_max y_max)
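As an illustration, a minimal sketch of reading the boxes output back in Python (the file path is a placeholder, and the whitespace-separated line format is assumed from the description above):

```python
# Minimal sketch: read a "boxes" output file back in, assuming one
# whitespace-separated "x_min y_min x_max y_max" line (in pixels) per detection.
# The path is a placeholder for wherever detect.py wrote the file for your image.
boxes = []
with open("sample_boxes.txt") as f:
    for line in f:
        x_min, y_min, x_max, y_max = map(float, line.split())
        boxes.append((x_min, y_min, x_max, y_max))

# Example: derive each box's center and side lengths from its corner coordinates.
for x_min, y_min, x_max, y_max in boxes:
    print((x_min + x_max) / 2, (y_min + y_max) / 2, x_max - x_min, y_max - y_min)
```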

Rajarshi1001 commented 2 years ago

Actually, the algorithm doesn't detect all the nanoparticles in the TEM image, so I was thinking of creating a training dataset in order to train the model on it. The dataset consists of a .bmp image (which can be converted to a png file) and a JSON file for annotation. The JSON file contains labels and corresponding points which, when drawn on the image, form an irregular boundary around each detected particle. [screenshot of the JSON annotation] These points drawn over a sample TEM image look like this: [annotated image]

I have gone through the readme file in the training folder. The training data for this model expects the image path and the starting and ending coordinates of the bounding rectangle, which is different from this format. Could you please tell me whether it is possible to train on this dataset, or provide some guidelines for preparing the training dataset in order to improve the performance of the model?

Could you please reply to this issue?

EricZQu commented 2 years ago

If it is different from our format, you can either change your dataset to our format or try to train some other model. I think the latter makes more sense, since your input data is quite different from ours. Since this issue does not concern any part of our code, I will close it.
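If you do go with the first option, a minimal sketch of the conversion might take the min/max of each polygon's points as the bounding box. The JSON field names (shapes, points) follow a common labelme-style layout and are assumptions about your annotation file:

```python
# Minimal sketch: turn polygon annotations (labelme-style JSON; the "shapes" and
# "points" field names are assumptions about the annotation file) into the
# "x_min y_min x_max y_max" lines described in the training readme.
import json

with open("annotation.json") as f:            # placeholder input path
    data = json.load(f)

lines = []
for shape in data.get("shapes", []):          # one entry per annotated particle
    xs = [x for x, y in shape["points"]]
    ys = [y for x, y in shape["points"]]
    lines.append(f"{min(xs):.0f} {min(ys):.0f} {max(xs):.0f} {max(ys):.0f}")

with open("annotation_boxes.txt", "w") as f:  # placeholder output path
    f.write("\n".join(lines) + "\n")
```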