dhuppenkothen opened this issue 3 years ago
@abigailStev @matteobachetti @dhuppenkothen I think the torch.fft module was implemented mainly because PyTorch's FFT implementation supports parallelized FFTs via the torch.nn.DataParallel wrapper, which lets users run FFTs across multiple GPUs. This can be useful for accelerating FFTs on large datasets or for real-time applications.
But what I am considering is how we could make stingray faster by using the torch.fft module; a rough sketch of what that could look like is below. I would like to discuss this topic further.
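For concreteness, here is a minimal sketch of what swapping the FFT backend could look like. The function name `powerspectrum_torch` and the normalization details are hypothetical, not anything from stingray's actual API; the sketch only assumes the public `torch.fft.rfft` / `torch.fft.rfftfreq` functions and optional CUDA availability.

```python
import numpy as np
import torch


def powerspectrum_torch(counts, dt):
    """Hypothetical sketch: compute an (unnormalized) power spectrum
    with torch.fft instead of numpy.fft, moving to GPU if available."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Convert the light curve to a torch tensor on the target device
    x = torch.as_tensor(counts, dtype=torch.float64, device=device)
    # rfft returns only the non-negative frequencies for real input
    ft = torch.fft.rfft(x)
    power = ft.real ** 2 + ft.imag ** 2
    freqs = torch.fft.rfftfreq(x.shape[-1], d=dt)
    # Move results back to NumPy for the rest of the stingray pipeline
    return freqs.numpy(), power.cpu().numpy()
```

On a CPU-only machine this mostly just exercises a different FFT backend; the potential win would presumably be on GPU, and for batches of many light curves transformed at once.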
We should try to run some performance tests with PyTorch. I'm also wondering whether there are other recent insights from computer science that could make our code faster or better. We should do a bit of a literature search on what led the PyTorch folks to implement FFTs.
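As a starting point for those performance tests, a rough timing comparison might look like the following. The array size and repeat count are arbitrary choices, and the `torch.cuda.synchronize()` calls are needed because GPU kernels launch asynchronously, so timings without them would be misleading.

```python
import timeit

import numpy as np
import torch

n = 2 ** 20
x_np = np.random.default_rng(0).standard_normal(n)
x_t = torch.as_tensor(x_np)

# CPU: NumPy vs torch.fft on the same data
t_np = timeit.timeit(lambda: np.fft.rfft(x_np), number=100)
t_torch = timeit.timeit(lambda: torch.fft.rfft(x_t), number=100)
print(f"numpy.fft.rfft (CPU): {t_np:.3f} s / 100 runs")
print(f"torch.fft.rfft (CPU): {t_torch:.3f} s / 100 runs")

# GPU timing, if available; warm up first, then synchronize so we
# measure actual kernel time rather than just the launch overhead
if torch.cuda.is_available():
    x_gpu = x_t.to("cuda")
    torch.fft.rfft(x_gpu)  # warm-up
    torch.cuda.synchronize()

    def gpu_fft():
        torch.fft.rfft(x_gpu)
        torch.cuda.synchronize()

    t_gpu = timeit.timeit(gpu_fft, number=100)
    print(f"torch.fft.rfft (GPU): {t_gpu:.3f} s / 100 runs")
```

A proper benchmark for stingray would also want to vary the transform length and test batched transforms, since that is where GPU FFTs tend to pay off.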
For reference, see the PyTorch documentation for torch.fft: https://pytorch.org/docs/stable/fft.html