DevoLearn / devolearn

Accelerate data-driven research on embryos with deep learning models :microscope:
MIT License

Migration of cv2 read and write operations (videos and images) to decord and imageio #52

Closed · aswarth123 closed this 3 years ago

aswarth123 commented 3 years ago

I was successful in migrating the code to decord, but I have a concern: the predictions change when I use decord (they are different from the ones produced with cv2). Can you tell me whether I should follow the same procedure for the lineage population model as well, or should I change my approach? #43
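For reference, the read-path swap I made looks roughly like this (a sketch only; the video path is illustrative, not the exact code in the PR, and note that the two readers return different channel orders):

```python
import cv2
import numpy as np
from decord import VideoReader, cpu

video_path = "sample_data/videos/embryo_timelapse.mov"  # illustrative path

# Old cv2 read path: frames come back as BGR uint8 arrays
cap = cv2.VideoCapture(video_path)
ret, cv2_frame = cap.read()
cap.release()

# New decord read path: frames come back as RGB NDArrays
vr = VideoReader(video_path, ctx=cpu(0))
decord_frame = vr[0].asnumpy()  # decord NDArray -> plain numpy uint8 array

# cv2 yields BGR while decord yields RGB, so the channel order has to be
# reconciled explicitly or downstream predictions can change.
decord_frame_bgr = cv2.cvtColor(decord_frame, cv2.COLOR_RGB2BGR)
print(np.abs(cv2_frame.astype(int) - decord_frame_bgr.astype(int)).mean())
```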

Mayukhdeb commented 3 years ago

Hi @aswarth123, thank you for this PR. So what you mean is that the I/O pipeline is working, but the model is giving wrong predictions, right?

Unfortunately our unit tests don't cover the outputs of the models, so this is something that we have to verify manually (for now).

Can you try to find out why the model is returning wrong predictions?

A few initial guesses would be:

I encourage you to clone your fork of the repo into a colab notebook and test it out.
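Something along these lines in a Colab cell should work (the fork URL here is an assumption based on the usernames involved):

```python
# install the fork directly from GitHub into the Colab runtime
!pip install git+https://github.com/aswarth123/devolearn.git
```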

aswarth123 commented 3 years ago

I will look into it and update you soon.

aswarth123 commented 3 years ago

I fixed the problem. The decord NDArray needed to be converted to a NumPy array, but I was also wrapping it in np.float32(), which was unnecessary. Now the predictions are the same as before. I will push the lineage population model within a few days.
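Roughly, the fix looks like this (a sketch with an illustrative path, not the exact code in the PR):

```python
import numpy as np
from decord import VideoReader, cpu

vr = VideoReader("sample_data/videos/embryo_timelapse.mov", ctx=cpu(0))  # illustrative path

# Before: the extra np.float32() cast changed the dtype the preprocessing expected
# frame = np.float32(vr[0].asnumpy())

# After: only convert the decord NDArray to a plain numpy uint8 array and
# leave any dtype conversion to the existing preprocessing
frame = vr[0].asnumpy()
```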

aswarth123 commented 3 years ago

@Mayukhdeb I migrated the code to imageio/decord, please have a look at it when you can.
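For the image side, the swap is roughly along these lines (a sketch only, not the exact code in the PR; the sample path is illustrative, and the channel-order difference between the two readers is the main thing to watch):

```python
import cv2
import imageio

image_path = "sample_data/images/seg_sample.jpg"  # illustrative path

# cv2 read path: BGR uint8
bgr = cv2.imread(image_path)

# imageio read path: RGB (or RGBA for images with an alpha channel)
rgb = imageio.imread(image_path)

# To keep downstream behaviour identical to the old cv2 pipeline,
# drop any alpha channel and flip RGB -> BGR explicitly.
rgb_as_bgr = rgb[..., :3][..., ::-1]
```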

Mayukhdeb commented 3 years ago

Hi @aswarth123, I installed your fork into a Colab runtime to compare it with the current models. It turns out that the changes you made are somehow leading to incorrect predictions on videos.

We have to be absolutely sure that whatever changes we make do not affect the performance of the models in the backend, and unfortunately the changes here do.

The following is a comparison between your fork and the original model pipeline:

Your fork:

[screenshot: segmentation predictions from the forked pipeline]

Original model:

[screenshot: segmentation predictions from the original pipeline]

As you can see, the model is failing to make the right predictions on a lot of the frames, so for now I guess you can either try to fix this or keep it on hold.

The fault on our end is that while our tests make sure the pipelines run, they do not test the actual performance of the models, so that is an issue I'll raise soon.
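Something along these lines could cover it (a sketch only; the import path, the predict signature, and the reference-array path are assumptions that would need to match the actual repo):

```python
import numpy as np
from devolearn import embryo_segmentor  # assumed import path


def test_segmentation_predictions_unchanged():
    """Guard against I/O changes silently altering model outputs."""
    segmentor = embryo_segmentor()
    pred = segmentor.predict(image_path="sample_data/images/seg_sample.jpg")  # assumed sample path
    reference = np.load("tests/reference_outputs/seg_sample_pred.npy")  # pre-computed with the known-good pipeline
    assert np.allclose(np.asarray(pred), reference, atol=1e-5)
```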

cc: @balicea

aswarth123 commented 3 years ago

@Mayukhdeb The issue I found is that some of the frames from embryo_timelapse.mov have an RGBA colour scheme; when I convert them to RGB and then to grayscale, I get those wrong predictions. But in your previous version you were doing that with cv2 and it was working fine. I think we need to check carefully whether it was also working fine for seg_sample.mov.
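A minimal sketch of the conversion I mean, assuming plain numpy and the same BT.601 weights that cv2.cvtColor uses (the file name is illustrative):

```python
import imageio
import numpy as np

frame = imageio.imread("frame_from_embryo_timelapse.png")  # illustrative; some frames come back RGBA

# Drop the alpha channel before any colour conversion.
if frame.ndim == 3 and frame.shape[-1] == 4:
    frame = frame[..., :3]

# Convert RGB -> grayscale with the BT.601 weights cv2.cvtColor uses,
# so the result matches the old cv2-based pipeline.
gray = (frame[..., 0] * 0.299 + frame[..., 1] * 0.587 + frame[..., 2] * 0.114).astype(np.uint8)
```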