maartenterpstra opened 1 year ago
Hello @maartenterpstra, I think the issue is due to the table-based NUFFT, which is used inside `tkbn.calc_toeplitz_kernel`.
Does every sample have a different trajectory? If they're all the same, you could apply the NUFFT outside the dataloader.
Hi @mmuckley. I was also thinking that as a workaround I could compute the NUFFT for a single batch outside the dataloader. In general, every sample has a different trajectory but the same number of spokes. Would this be possible?
Hello @maartenterpstra, it may be more efficient to loop over the list or use a batched NUFFT. The batched NUFFT is good for a large number of small NUFFTs. You can see how to use it here.
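For reference, a minimal sketch of the batched-NUFFT approach, assuming torchkbnufft's `KbNufft` accepts a trajectory with a leading batch dimension as described in its documentation; the sizes and names here are illustrative, not from the original thread:

```python
import torch
import torchkbnufft as tkbn

im_size = (256, 256)
batch_size = 8
klength = 64 * 256  # e.g. 64 spokes of 256 points each (illustrative)

# one NUFFT object shared across the whole batch
nufft_ob = tkbn.KbNufft(im_size=im_size)

# images: (batch, coils, *im_size), complex-valued
images = torch.randn(batch_size, 1, *im_size, dtype=torch.complex64)

# per-sample trajectories stacked along a leading batch dimension:
# (batch, ndims, klength), scaled to [-pi, pi]
ktraj = (torch.rand(batch_size, 2, klength) - 0.5) * 2 * torch.pi

# batched NUFFT: each sample is transformed with its own trajectory
kdata = nufft_ob(images, ktraj)
```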
I also opened #74 as a potential enhancement with a pointer to where the code controls threading if you'd be interested in that route.
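A common cause of this kind of hang is thread oversubscription inside DataLoader worker processes. A sketch of that workaround, assuming that limiting intra-op threads in each worker is what resolves the deadlock here (`worker_init_fn` and `torch.set_num_threads` are standard PyTorch; `dataset` stands in for the user's dataset):

```python
import torch
from torch.utils.data import DataLoader

def limit_threads(worker_id):
    # restrict each worker process to a single intra-op thread so the
    # NUFFT's internal threading does not deadlock or oversubscribe CPUs
    torch.set_num_threads(1)

loader = DataLoader(
    dataset,            # the Dataset performing the Toeplitz NUFFT in __getitem__
    batch_size=4,
    num_workers=4,
    worker_init_fn=limit_threads,
)
```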
Hi,
I'm trying to perform on-the-fly data undersampling in my PyTorch dataset. To do this, I perform a Toeplitz NUFFT in the `__getitem__` function of my `Dataset` class. This works as expected. Now, I want to do batching, so I wrap the PyTorch `Dataset` in a PyTorch `DataLoader`. This works as expected when `num_workers=0`. However, when `num_workers` is non-zero, computation of the NUFFT seemingly enters an infinite loop.

**Expected behaviour**
Performing a NUFFT in parallel using multiple workers should result in undersampled images.
**Observed behaviour**
Sampling the dataloader results in a hanging script, seemingly entering an infinite loop.
**Extra information**

**Minimal example**
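The minimal example itself did not survive extraction. Below is a hypothetical reconstruction of the setup described above, assuming a 2D radial-style trajectory and torchkbnufft's Toeplitz interface (`calc_toeplitz_kernel` and `ToepNufft`); all sizes are illustrative:

```python
import torch
import torchkbnufft as tkbn
from torch.utils.data import Dataset, DataLoader

im_size = (256, 256)

class ToeplitzDataset(Dataset):
    def __init__(self, n_samples=16):
        self.images = torch.randn(n_samples, 1, *im_size, dtype=torch.complex64)
        self.toep_ob = tkbn.ToepNufft()

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # per-sample trajectory: same number of spokes, different angles
        ktraj = (torch.rand(2, 64 * 256) - 0.5) * 2 * torch.pi
        # reportedly hangs when num_workers > 0: calc_toeplitz_kernel
        # uses the table-based NUFFT internally
        kernel = tkbn.calc_toeplitz_kernel(ktraj, im_size)
        return self.toep_ob(self.images[idx].unsqueeze(0), kernel).squeeze(0)

loader = DataLoader(ToeplitzDataset(), batch_size=4, num_workers=2)
next(iter(loader))  # never returns when num_workers > 0
```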