Kobe972 opened this issue 2 years ago
I don't have the time or expertise to implement such a change, but I would welcome any contribution.
What approach did you have in mind?
I'm thinking it would be easiest if there were some dataset that paired 3D scans (or NeRFs, photographs, or video) with DICOM imagery.
You could maybe also downscale the data and use the original, higher-resolution segmentation as the training target for the downscaled input.
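If that's the idea, a minimal sketch of building one such training pair might look like the following; the function and variable names are hypothetical, and `volume`/`mask` are assumed to be a NumPy CT volume and its segmentation at the original resolution:

```python
import numpy as np
from scipy.ndimage import zoom

def make_training_pair(volume: np.ndarray, mask: np.ndarray, factor: float = 0.5):
    """Pair a downscaled input with the original-resolution segmentation."""
    # Trilinear interpolation (order=1) for the intensity volume.
    low_res_input = zoom(volume, factor, order=1)
    # The target stays at full resolution, so a model trained on these
    # pairs learns to predict detail the downscaled input has lost.
    return low_res_input, mask
```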
A first step might be collecting some data and turning it into raw tensors (arrays); after that, maybe it can just be run through a trainer.
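For the raw-tensor step, a rough sketch with pydicom could look like this, assuming a directory of single-slice CT DICOM files from one series (the function name and path layout are my own, not anything this repository defines):

```python
import glob
import numpy as np
import pydicom

def series_to_tensor(directory: str) -> np.ndarray:
    """Stack a DICOM series into a single 3D array in Hounsfield units."""
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{directory}/*.dcm")]
    # Sort by position along the patient z-axis so slices are in order.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    # Apply the DICOM rescale tags so intensities are comparable across scans.
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)
```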
I think the best way to represent meshes in machine learning is still an open research question (I could be wrong), so surveying current work on that could be helpful too. NeRFs and photographs seem to be pretty common representations.
The thresholding method can't segment the DICOM data precisely, and I suspect machine learning could do a better job. There doesn't seem to be a well-maintained open-source repository that uses machine learning for this segmentation. It wouldn't be hard to make a breakthrough here, and the repository would gain many more stars if it did. I'm trying to do so.
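For context, the fixed-threshold baseline being criticized can be as simple as the sketch below; the file name and the ~300 HU bone-like cutoff are illustrative, and this repository's actual thresholding code may differ. A single global cutoff mislabels voxels wherever tissue intensities overlap, which is exactly the imprecision a learned model could address:

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("slice.dcm")  # hypothetical input file
# Convert stored pixel values to Hounsfield units via the rescale tags.
hu = ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

# Global thresholding: every voxel above the cutoff is labeled foreground.
mask = hu > 300.0
print(f"foreground fraction: {mask.mean():.3f}")
```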