Closed TibbersHao closed 8 months ago
```python
if images.ndim == 3:
    images = images.unsqueeze(1)
```
This looks a bit suspicious. Could you explain when this case occurs and why we add a dimension at axis=1 rather than 0?
I think that if you pull in N images at a time, you get an (N, Y, X) array, so you need to unsqueeze at dim 1 to get (N, 1, Y, X).
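The shape change above can be demonstrated with NumPy, whose `np.expand_dims(..., axis=1)` mirrors `torch.Tensor.unsqueeze(1)` here (the array sizes are illustrative):

```python
import numpy as np

# A batch of N single-channel frames often arrives as (N, Y, X).
images = np.zeros((4, 64, 64))

# Insert the channel axis at position 1 (not 0) so the batch
# dimension N stays first: (N, Y, X) -> (N, 1, Y, X).
images = np.expand_dims(images, axis=1)
print(images.shape)  # (4, 1, 64, 64)
```

Unsqueezing at dim 0 instead would produce (1, N, Y, X), which would be read as a single N-channel image rather than N single-channel images.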
This PR modifies where qlty stitching happens during training and inference.

To specify:

- During training: all labeled frames and their corresponding masks are loaded into memory and cropped into patches by qlty. Now `batch_size_train` controls how many patches we load onto the device per batch. Previously this parameter controlled the number of frames loaded per batch, which would likely run into memory issues when big images produce large numbers of patches.
- During inference: grab a single frame at a time, crop it into patches, then use `batch_size_inference` to control how many patches we predict per batch. Previously this parameter controlled how many images we passed to the device per batch, which caused the same issue as described above.
- Added data standardization for input images.
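The inference-side change can be sketched as follows. This is a minimal NumPy sketch, not the PR's code: `standardize`, `crop_to_patches`, and the lambda `model` are hypothetical stand-ins for the added standardization step, the qlty cropping, and the real network.

```python
import numpy as np

def standardize(frame):
    """Zero-mean, unit-variance scaling (hypothetical stand-in
    for the added data standardization)."""
    return (frame - frame.mean()) / (frame.std() + 1e-8)

def crop_to_patches(frame, patch=32):
    """Split a (Y, X) frame into non-overlapping tiles
    (hypothetical stand-in for the qlty cropping)."""
    ys, xs = frame.shape[0] // patch, frame.shape[1] // patch
    return np.stack([frame[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                     for i in range(ys) for j in range(xs)])

def predict_frame(frame, model, batch_size_inference=8):
    """One frame at a time: standardize, crop to patches, then
    batch over patches so only batch_size_inference patches go
    to the device at once."""
    patches = crop_to_patches(standardize(frame))
    outs = []
    for start in range(0, len(patches), batch_size_inference):
        outs.append(model(patches[start:start + batch_size_inference]))
    return np.concatenate(outs)

# Dummy "model" that returns its input; a 64x96 frame yields 6 patches.
preds = predict_frame(np.random.rand(64, 96), model=lambda x: x)
print(preds.shape)  # (6, 32, 32)
```

The key point is that device memory scales with `batch_size_inference` patches rather than with whole frames, which is what avoids the memory issues described above.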