Closed: mwk088 closed this issue 2 years ago.
If you want to free all memory allocated by FFTW, you should call fftwf_cleanup() at the end of your program, as documented in the manual. You also need to call fftwf_destroy_plan first, which appears to be commented out in your code above.
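A minimal sketch of the call order being described (not the poster's actual code, and buffer size and flags are illustrative): plan once, execute as often as needed, then destroy every plan before the single fftwf_cleanup() call at the end.

```cpp
#include <fftw3.h>

int main()
{
    const int n = 1024;
    fftwf_complex *in  = fftwf_alloc_complex(n);
    fftwf_complex *out = fftwf_alloc_complex(n);

    // Plan once at startup.
    fftwf_plan plan = fftwf_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

    // ... fill `in` and call fftwf_execute(plan) as often as needed ...
    fftwf_execute(plan);

    // Teardown, in this order:
    fftwf_destroy_plan(plan);   // frees the plan itself
    fftwf_cleanup();            // frees FFTW's internal cached/planner data
    fftwf_free(in);
    fftwf_free(out);
    return 0;
}
```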
Our program runs indefinitely. We are seeing continual memory growth over time, and Valgrind is pointing us to FFTW, hence our attempt to call cleanup with some regularity.
If the FFT planner continuously accumulates wisdom over time, how do we "clean up" in a program that runs forever?
You need to call fftwf_destroy_plan after you are done with a plan; otherwise you have a memory leak. (Presumably you should do this in the destructor for Framework_Fft.)
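A hedged sketch of that suggestion follows. The real Framework_Fft members are not shown in this thread, so m_plan, m_in, and m_out are assumed names; the point is only that the destructor is the natural place for fftwf_destroy_plan().

```cpp
#include <fftw3.h>

class Framework_Fft {
public:
    void initialize(int n)
    {
        m_in   = fftwf_alloc_complex(n);
        m_out  = fftwf_alloc_complex(n);
        m_plan = fftwf_plan_dft_1d(n, m_in, m_out, FFTW_FORWARD, FFTW_ESTIMATE);
    }

    ~Framework_Fft()
    {
        if (m_plan) fftwf_destroy_plan(m_plan);  // frees the block Valgrind flags
        if (m_in)   fftwf_free(m_in);
        if (m_out)  fftwf_free(m_out);
    }

private:
    fftwf_plan     m_plan = nullptr;
    fftwf_complex *m_in   = nullptr;
    fftwf_complex *m_out  = nullptr;
};
```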
Thanks for the feedback. Currently, we create a plan once inside initialize at startup and use that plan over and over, with FftComplex being executed on streaming data constantly. I have updated the code above to show what it now looks like. Below is what Valgrind tells me about the fftwf leak. Is Valgrind reporting this incorrectly?
==20853== 19,080 (96 direct, 18,984 indirect) bytes in 4 blocks are definitely lost in loss record 14,519 of 14,694
==20853== at 0x4C3C486: memalign (vg_replace_malloc.c:1517)
==20853== by 0x14C83BE4: fftwf_malloc_plain (in /usr/local/lib/libfftw3f.so.3.3.2)
==20853== by 0x14D60E21: fftwf_mkapiplan (in /usr/local/lib/libfftw3f.so.3.3.2)
==20853== by 0x14D67B25: fftwf_plan_many_dft (in /usr/local/lib/libfftw3f.so.3.3.2)
==20853== by 0x14D66F76: fftwf_plan_dft (in /usr/local/lib/libfftw3f.so.3.3.2)
==20853== by 0x14D66D85: fftwf_plan_dft_1d (in /usr/local/lib/libfftw3f.so.3.3.2)
==20853== by 0x566E72A7: Framework_Fft::initialize() (Framework_Fft.cc:17)
==20853== by 0x566E7380: Framework_Fft::FftComplex(unsigned long, std::complex<float> const*, std::complex<float>*) (Framework_Fft.cc:40)
==20853== by 0x55ACDBA1: CPhy_Dft_Ofdm_impl::Phy_Dft_Ofdm_ac(CState_BurstStateParams_impl*) (CPhy_Dft_Ofdm_impl.cc:202)
==20853== by 0x55B6449C: CState_NonHt_Ltf_Ofdm_impl::Process(unsigned long, std::complex<float>**, CState_BurstStateParams_impl*) (CState_NonHt_Ltf_Ofdm_impl.cc:507)
==20853== by 0x55B68DFC: CStateHandler_NonHt_Plcp_Ofdm_impl::Process(unsigned long, unsigned long, std::complex<float>**, CState_BurstStateParams_impl*) (CStateHandler_NonHt_Plcp_Ofdm_impl.cc:191)
==20853== by 0x55805DDB: gr::wifi_ac::CGRHandler_Ofdm_Plcp_impl::handler(boost::shared_ptr<pmt::pmt_base>, std::vector<void*, std::allocator<void*> >) (CGRHandler_Ofdm_Plcp_impl.cc:293)
As I said, the plan-creation process allocates some memory for cached information. (Indeed, Valgrind is reporting the memory as being allocated by fftwf_plan_dft_1d, which you say you are only calling at the beginning.) This is only deallocated if you call fftwf_cleanup(), which you would do at the very end of your program, after all of your plans have been destroyed, as explained in the manual. (Of course, at that point there is no point in doing the cleanup except to silence Valgrind, since the OS will deallocate everything for you when your program exits.)
Don't call fftwf_cleanup each time you execute an FFT! Do it at the end, after you are done with FFTW and have destroyed all your plans!
In any case, this allocation is only done during planning, not during fftwf_execute, so it cannot be responsible for "continual memory growth over time" if you only construct plans once at the beginning of your program.
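A sketch of the pattern being described, under the assumption of a single 1-D plan reused for streaming data: only the planning call allocates, so executing the same plan in a loop keeps memory use flat.

```cpp
#include <fftw3.h>

int main()
{
    const int n = 64;
    fftwf_complex *buf_in  = fftwf_alloc_complex(n);
    fftwf_complex *buf_out = fftwf_alloc_complex(n);

    // The only call here that allocates planner memory.
    fftwf_plan plan = fftwf_plan_dft_1d(n, buf_in, buf_out,
                                        FFTW_FORWARD, FFTW_ESTIMATE);

    for (long frame = 0; frame < 1000000; ++frame) {
        // ... copy the next block of streaming samples into buf_in ...
        fftwf_execute(plan);      // no allocation happens here
        // ... consume buf_out ...
    }

    fftwf_destroy_plan(plan);
    fftwf_cleanup();
    fftwf_free(buf_in);
    fftwf_free(buf_out);
    return 0;
}
```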
PS. No need for a mutex in FftComplex, since fftwf_execute is thread safe. You only need a mutex in the planning routines, e.g. in initialize and any destructor.
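A hedged sketch of that locking policy, with illustrative names rather than the poster's actual code: the FFTW planner is not thread safe, so plan creation and destruction are serialized with a mutex, while fftwf_execute runs without one.

```cpp
#include <fftw3.h>
#include <mutex>

class Fft {
public:
    void initialize(int n)
    {
        std::lock_guard<std::mutex> lock(planner_mutex_);   // planner calls need the lock
        in_   = fftwf_alloc_complex(n);
        out_  = fftwf_alloc_complex(n);
        plan_ = fftwf_plan_dft_1d(n, in_, out_, FFTW_FORWARD, FFTW_ESTIMATE);
    }

    void execute()
    {
        fftwf_execute(plan_);    // no lock needed: execution is thread safe
    }

    ~Fft()
    {
        std::lock_guard<std::mutex> lock(planner_mutex_);   // destruction is a planner call too
        if (plan_) fftwf_destroy_plan(plan_);
        if (in_)   fftwf_free(in_);
        if (out_)  fftwf_free(out_);
    }

private:
    static std::mutex   planner_mutex_;   // shared by every planner call in the program
    fftwf_plan          plan_ = nullptr;
    fftwf_complex      *in_   = nullptr;
    fftwf_complex      *out_  = nullptr;
};

std::mutex Fft::planner_mutex_;
```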
Much appreciated, thank you for the help.
Hello, I am seeing a memory leak in my code, and Valgrind is pointing to an FFTW function; however, I am unsure whether that is really the cause. I have pasted the code in which the plans are created, executed, and destroyed so you can see whether I am handling FFTW incorrectly. Also, this is a multithreaded application.
Any help would be greatly appreciated.
Thank you, Mark