pi-null-mezon / OpenIST

Instruments for CNN training, based on Qt, Opencv and tiny-dnn projects

It might be helpful to include some instructions on how to compile and use. #1

pliptor opened 7 years ago

pi-null-mezon commented 7 years ago

Yeah! Unfortunately I have no time for this... Can I delegate this task to you?

pliptor commented 7 years ago

We could try, but right now I don't have much idea where to start. I know a little bit about Qt and how to compile the tiny-dnn examples, but not much about how your system works.

pi-null-mezon commented 7 years ago

Ok. Let's start from the original idea. I started working on this project when I read the article about the SegNet architecture. The authors used Caffe or Torch for their experiments. At that time I was already familiar with tiny-dnn and OpenCV, but had not worked with Caffe or Torch, and I also wanted to make a unified cross-platform solution. So I gathered all my knowledge and started to implement the SegNet architecture by means of tiny-dnn (at that time it was still tiny-cnn). OpenCV was used as a proven instrument to crop/rotate/rescale/visualize images, and Qt as the development framework. As tiny-cnn had no serialization facilities (tiny-dnn now has them), I implemented serialization by means of OpenCV's FileStorage facility. When I saw that my SegNet-like neural network worked for lung segmentation, I started to work on classes that could handle neural networks for ordinary multiclass classification. Eventually I made some experiments with tuberculosis diagnostics by means of neural network processing of chest X-ray images, and got about 86% correct results. Then my boss started to think about how to integrate this technology into a business process...

Before I start to guide you through the build steps, let's talk about what you want to achieve by means of this project. There are a lot of more powerful libraries for deep neural network training, with good documentation and wide communities. Have you researched this field?

pliptor commented 7 years ago

Thank you for your time explaining in detail the background and motivation of your work.

I started playing with tiny-dnn a few weeks ago and read in one of your support requests that you obtained very good results experimenting with it, and understood you were using it for image segmentation. Someone was looking for support in tiny-dnn for what appeared to me to be a form of image segmentation too, so I referred him to your project.

Yesterday I tried to see how your project worked and tried to compile it. That's when I realized it may require some additional information on how to build it.

I was not planning to use your project myself. I was just interested in seeing what it looked like overall and thought about leaving a note here, as others may try to compile your project too. If you prefer to wait for someone that really needs the project to compile, that's fine. No, I haven't researched this field much yet. I'm trying to learn the basics now.

pi-null-mezon commented 7 years ago

What operating system do you use?

pliptor commented 7 years ago

I'm using Linux (Ubuntu 16.04 LTS, 64-bit). I don't have any GPU or other accelerators, so I can't test anything heavy.

pi-null-mezon commented 7 years ago

Have you already installed Qt Creator and OpenCV (3.0.0 or newer), and downloaded the latest snapshot of tiny-dnn?

pliptor commented 7 years ago

I have:

pi-null-mezon commented 7 years ago

Ok, then let's start with segmentation. Clone the OpenIST repo, then in OpenIST/CMDUtils/SegNet open SegNet.pro with Qt Creator. Adjust the path in tinydnn.pri so it points to the tiny-dnn sources. Then try to build and let me know what errors you get.
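Roughly, the same steps from a Linux command line would look something like this (the clone URLs and the plain qmake/make invocation are my assumption of the usual Qt workflow; Qt Creator does the equivalent through its Build button):

git clone https://github.com/pi-null-mezon/OpenIST.git
git clone https://github.com/tiny-dnn/tiny-dnn.git
cd OpenIST/CMDUtils/SegNet
# edit tinydnn.pri so that it points at the tiny-dnn sources cloned above
qmake SegNet.pro
make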

pliptor commented 7 years ago

SegNet compiles after a few changes. I opened pull requests.

pi-null-mezon commented 7 years ago

Ok, now you should be able to run a training or segmentation task. For training you should download a training set, for instance this one; you can also find additional links here. In the dataset, the naming convention is the following: raw image files can have arbitrary names, but the corresponding label images should have a '@' symbol at the end of the filename (see the example layout at the end of this comment). Png, jpg and bmp files should be supported, depending on the OpenCV installation. When the dataset is ready, run the SegNet utility with the proper command-line arguments. For instance:

SegNet -i[path to the directory with training data] -o[output file name with extension .xml or .yml] -e25 -m4 -r256 -c256

The application should then start, load the dataset into RAM, shuffle the images and begin training. The arguments mean the following: -e25 means 25 epochs, i.e. how many times the whole dataset will be used for network weight updates; -m4 means the minibatch will contain 4 images, i.e. the weights are updated each time 4 images have been forwarded; -r and -c set the rows and columns that all images will be resized to before training starts (here 256x256; resizing to a lower resolution makes training faster, but the network loses the ability to see fine details). At the end of each epoch the application prints some statistics about the training process and updates an image with the activations of the last layer for the last image of the epoch (you should see the segmentation results become more accurate from the early epochs to the later ones). After training has finished you will be able to use the saved network weights for image segmentation:

SegNet -n[saved file with the network weights] -s[file name of the image that should be segmented] -a[where to save the segmented image]

A pretrained network for lung segmentation can be found here.
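For reference, a training directory that follows this naming convention might look like the listing below (the filenames are made up, and the '@' placement reflects my reading of the rule above, i.e. at the end of the base name, before the extension):

dataset/chest_001.png     # raw image, arbitrary name
dataset/chest_001@.png    # corresponding label image
dataset/chest_002.jpg     # another raw image
dataset/chest_002@.jpg    # its label image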

pliptor commented 7 years ago

Thank you for the detailed explanation. Yesterday SegNet built successfully, but I noticed I was getting a segmentation fault when trying to run it. I found the problem: OpenCV highgui was linking against libqt4-test, and SegNet was crashing in libqt4-test. I removed the library from my system and rebuilt OpenCV so that highgui links with Qt5. SegNet runs now. I'll try your tests.
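For anyone hitting the same crash, the rebuild was along these lines (the flags and paths are illustrative, not the exact commands I used; the key point is configuring OpenCV with Qt support so that highgui links against Qt5 rather than Qt4):

cd opencv/build
cmake -D CMAKE_BUILD_TYPE=Release -D WITH_QT=ON ..
make -j4
sudo make install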

pliptor commented 7 years ago

I tested:

Everything looks functional. I don't know if the output quality I get is expected, though. I trained with a very small sample window size, but I get more contrast compared to the pre-trained model.

./SegNet -n../../Sharedfiles/Pretrained/SegNetForLungs.yml -s../../Sharedfiles/Pretrained/LungsExample.jpg -aOUTPUT.jpg

[attached image: output]

./SegNet -nTestOut.yml -s../../Sharedfiles/Pretrained/LungsExample.jpg -aOUTPUT2.jpg

[attached image: output2]

pi-null-mezon commented 7 years ago

Yes, you have got the result that was expected. Also, it seems my pre-trained network was trained with the previous version of tiny-dnn (0.1.1) and I forgot to update it.

pliptor commented 7 years ago

Thank you. I think we now have enough information here for someone to build and use a basic form of the code and get started.

pi-null-mezon commented 7 years ago

Great. Thank you!

pliptor commented 7 years ago

If you update the pre-trained network, I can cross-validate it and upload a new output jpg with better contrast for verification.

pi-null-mezon commented 7 years ago

Try these weights

pliptor commented 7 years ago

Thank you. I'd say this looks impressively good. Nice job!

./SegNet -n../../Sharedfiles/Pretrained/SegNet_for_xraylungs.yml -s../../Sharedfiles/Pretrained/LungsExample.jpg -aOUTPUT3.jpg

[attached image: output3]