Open ajinkya-kulkarni opened 1 month ago
Hi @ajinkya-kulkarni, you could replicate your 2D image to create a fake time-lapse:

```python
video = np.stack([image] * 5)
```

and run ultrack on it as a time-lapse with `max_noise` above 0, for example:

```python
config.segmentation_config.max_noise = 0.01
```
Then select a single plane from the output video; the planes should all be very similar, because each one is the original image with a bit of noise added. The noise is necessary because all of your frames are identical.
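The replication step above can be sketched as follows. This is a minimal illustration only: the image shape is made up, and the comment about what `max_noise` does reflects my reading of the suggestion, not ultrack's documented internals.

```python
import numpy as np

# Stand-in 2D image (assumption: the real data is a single-channel 2D array).
image = np.random.rand(64, 64).astype(np.float32)

# Stack 5 identical copies along a new leading axis to fake a time-lapse (T, Y, X).
video = np.stack([image] * 5)
print(video.shape)  # → (5, 64, 64)

# Every frame is identical at this point; setting max_noise > 0 lets ultrack
# perturb each frame slightly so the per-frame segmentations are not degenerate.
assert np.array_equal(video[0], video[-1])
```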
Interesting, thanks, will try it out!
Hi @JoOkuma, planning to try your suggestion today. Do you have a minimal example notebook I can try out for just getting the masks from my images?
Hello team, thanks for the repo! I was wondering if there is a way to get only the instance-segmentation masks, without the tracking on top. I find the ensemble segmentation idea really useful, and would ideally like to train ultrack on a custom dataset (2D images and their 2D instance-segmentation masks) and run inference with the trained model on unseen 2D images.