DeepTrackAI / DeepTrack2

MIT License

DATA INPUT and MODEL LOADING #203

Open avi-xd opened 9 months ago

avi-xd commented 9 months ago

Hi,

I am trying to replicate the paper example 'inline_holography_3d_tracking'. The code is working fine, but I am facing a few issues. I would be very thankful if you could help.

  1. I am trying to save the trained model using `save.model(my_model)`, but when I load the weights using `load.weights('my_model')`, I get this error: "ValueError: Unable to restore custom object of class "MeanMetricWrapper" (type _tf_keras_metric)." Is there any way to save and load the model? I have also tried the h5 format, and it gives the same issue.

  2. I have a measurement video of holographic particles on which I want to use this U_net model. How should I input my video? Does it need some kind of preprocessing? I am confused because the input dataset in the example has the video in .mat format, while my video is in .AVI format. Also, in the example there are traces and a mapping. How should I get these for my dataset?

  3. For the traces of the particles, do I need to use the MAGIK model? Is it possible to do with U_net?

Thank you for your time and this framework.

avi-xd commented 9 months ago

@BenjaminMidtvedt @JesusPinedaC

BenjaminMidtvedt commented 9 months ago

@avi-xd

  1. If you are just loading the model for evaluation, you can do `load_model("my_model", compile=False)`.
  2. You should load your video as a numpy array, using for example cv2. You should format your data to be of shape `(timestep, width, height, 1)`, iirc. You might need to normalize the data depending on the format. Look at the pixel intensity histogram of the simulations and ensure it matches decently with the histogram of the experimental data.
  3. U-net will not do tracing. The simplest is to use the Hungarian method, or a pretrained MAGIK model. @JesusPinedaC can help with that if needed.
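A minimal sketch of points 2 and 3 above. Note the helper names (`frames_to_input`, `link_frames`) and the zero-mean/unit-std normalization are assumptions for illustration, not DeepTrack2 API; actual frame reading would be done with `cv2.VideoCapture` (omitted here, since it needs a video file), and the linking shown is the plain Hungarian assignment via scipy, not MAGIK:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def frames_to_input(frames):
    """Stack 2D grayscale frames into a (timestep, width, height, 1)
    float32 array and normalize the whole stack to zero mean, unit std.
    `frames` could come from cv2.VideoCapture + cv2.cvtColor(..., cv2.COLOR_BGR2GRAY).
    Whether this normalization matches the simulated training data must be
    checked against the intensity histograms, as noted above."""
    stack = np.stack(frames).astype("float32")[..., np.newaxis]
    stack -= stack.mean()
    stack /= stack.std() + 1e-8
    return stack

def link_frames(detections_a, detections_b):
    """Hungarian-method linking between two consecutive frames:
    minimize the total Euclidean distance between matched detections.
    Each input is an (N, d) array of particle positions."""
    cost = np.linalg.norm(detections_a[:, None] - detections_b[None], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]
```

Comparing `np.histogram` of the normalized stack against a batch of simulated images is one quick way to do the histogram check mentioned above.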
avi-xd commented 8 months ago

Thank you for the reply!! It was really helpful. I had a few more doubts. I generated the synthetic data using my optical setup details, but the images have number of features = 377. I didn't get the logic of taking number_of_features as `int(MAX_Z / Z_SCALE - MIN_Z / Z_SCALE)`. Because of this high number of features, the output layer of the model has a large number of nodes, and I am not able to train the model due to memory issues, even with a 500 GB CPU cluster and 4 RTX 2090 GPUs, each with 11 GB of memory.

I am getting such a high number of features because my setup resolution is 0.106 µm/px. The physical pixel size is 5.86 µm with 55x magnification, and (MAX_Z - MIN_Z) is just 40 µm. So if I put Z_SCALE = 0.106e-6, the number of features becomes 377, and if I take Z_SCALE = 5.86e-6, the number of features becomes 6, which is very low. Do you guys have any suggestions on this? @BenjaminMidtvedt
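For reference, the arithmetic behind the two feature counts above: the formula just counts how many Z_SCALE-sized slices fit in the 40 µm axial range. A symmetric split MIN_Z = -20 µm, MAX_Z = +20 µm is assumed here for illustration; only the 40 µm span matters for the count:

```python
def number_of_features(min_z, max_z, z_scale):
    # Logic from the example script: one output feature (z-slice)
    # per z_scale step across the axial range [min_z, max_z].
    return int(max_z / z_scale - min_z / z_scale)

# Sampling the 40 µm range at the lateral resolution (0.106 µm/px):
print(number_of_features(-20e-6, 20e-6, 0.106e-6))  # -> 377
# Sampling it at the physical camera pixel size (5.86 µm):
print(number_of_features(-20e-6, 20e-6, 5.86e-6))   # -> 6
```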

avi-xd commented 8 months ago

Hi, can you please tell me how one can get the 'ProcessedField', 'Traces' and 'mapping' for their own video data? I am not able to find any lead in the papers about numerical algorithms for pre-processing the data before providing it to the model. In a 3D holography model, it seems necessary to have these for the model to work. Any lead would be of great help. @BenjaminMidtvedt @JesusPinedaC