Open pavlis opened 1 year ago
For the record, I think this problem is linked to two features of the multitaper method implemented in Prieto's mtspec package, and possibly also the matlab implementation. I can't be sure about the latter as it isn't open source like Prieto's package is. There are two fundamental issues I see that need to be addressed.
There are no other obvious issues I can see in MTPowerSpectrumEngine that could easily explain the differences I'm seeing above. There is definitely a scaling problem, but the original docstring admits that. I need to put a link to mtspec in the docstring after I get this repaired, as an important cross reference.
Made some progress on this problem, but I do not have a complete solution. The good news is the biggest differences between my implementation and mtspec come down to two things: (1) mtspec by default uses a more compute-intensive method called "adaptive weights" that has definite advantages and explains a lot of the differences seen above, and (2) I most definitely have a scaling problem. The latter is obvious from the figures above.
I built a test comparing spectra computed from a pure sine wave signal embedded in Gaussian white noise. The figure below was computed that way for two analysis windows of length 128 and 12832 samples, with a signal-to-noise ratio of 1.0 and a sine wave with 32 cycles in the window. The time series are not interesting, but the spectra do illustrate where I am on this puzzle. Note the figure has two subplot calls with matplotlib. The top pair is for the 128-sample window and the bottom is for the 12832-sample window, BUT showing only the first 100 points, where the spectral peak for the sine wave is visible. In each subplot pair the top is the output of MTPowerSpectrumEngine and the bottom is the output from mtspec. Note in all cases the x axis is frequency. The peaks are at the same location because I set the sample interval for the top data to 0.1 s and the lower one to 0.1·n1/n2, where n1 is 128 and n2 is 12832.
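For concreteness, the test-signal recipe can be sketched as follows. The function name, seed, and amplitude scaling are my own illustrative choices, not necessarily what the notebook used; only the window lengths, cycle count, and SNR come from the description above.

```python
import numpy as np

def make_test_signal(n=128, cycles=32, snr=1.0, dt=0.1, seed=0):
    """Sine wave with `cycles` full cycles in an n-sample window, plus
    Gaussian white noise scaled to the requested rms signal-to-noise ratio.

    A sketch of the test described in the issue; exact seed and amplitudes
    are assumptions.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n) * dt
    f0 = cycles / (n * dt)          # frequency giving `cycles` cycles in the window
    signal = np.sin(2.0 * np.pi * f0 * t)
    noise = rng.standard_normal(n)
    # Scale noise so rms(signal) / rms(noise) == snr
    noise *= np.sqrt(np.mean(signal**2)) / (snr * np.sqrt(np.mean(noise**2)))
    return t, signal + noise
```

The same function covers both cases in the figure: `make_test_signal(128)` for the short window and `make_test_signal(12832, dt=0.1 * 128 / 12832)` to put the spectral peak at the same frequency.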
This first shows that I'm fairly sure the biggest issue that led me to this post is the difference between using adaptive weights and a simple average of the so-called eigentaper spectra. These figures also reveal two residual problems. One is easy to fix; the other is more puzzling.
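To make the distinction concrete, here is a minimal sketch of the two weighting schemes, using scipy's dpss for the tapers. This is not MTPowerSpectrumEngine's code, and the adaptive branch follows Thomson's standard iteration (as described in the multitaper literature) rather than any particular package's implementation; all parameter names are illustrative.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=4.0, kmax=7, dt=1.0, adaptive=True, niter=10):
    """Multitaper PSD using either a simple average of eigenspectra
    ("unity" weights) or Thomson's adaptive weighting.  Illustrative sketch."""
    n = len(x)
    tapers, ratios = dpss(n, nw, Kmax=kmax, return_ratios=True)
    # Eigenspectra: periodogram of each tapered copy of the data.
    yk = np.fft.rfft(tapers * x[None, :], axis=1)
    sk = np.abs(yk) ** 2
    if not adaptive:
        s = sk.mean(axis=0)                  # simple eigentaper average
    else:
        # Thomson adaptive weights: iterate
        #   d_k(f) = S(f) / (lambda_k S(f) + (1 - lambda_k) sigma^2)
        # starting from the average of the two best-concentrated eigenspectra.
        sigma2 = np.var(x)
        s = sk[:2].mean(axis=0)
        lam = ratios[:, None]
        for _ in range(niter):
            d = s / (lam * s + (1.0 - lam) * sigma2)
            w = d ** 2 * lam
            s = (w * sk).sum(axis=0) / w.sum(axis=0)
    return np.fft.rfftfreq(n, d=dt), s * dt  # dt factor for PSD units
```

Running both modes on the same sine-plus-noise signal reproduces the qualitative difference seen in the figures: the adaptive estimate suppresses the broadband noise floor relative to the straight average.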
For the latter I suspect some kind of weird off-by-one error, but it may be more subtle than that. Here I plot only the first 20 samples of the 128-sample window result in the same format as above (i.e. mtspec is on the bottom): I then computed a crude rescaling factor, took the difference between the two curves, and plotted it: So there is definitely a problem with the DC term, but it is not just the DC term, as there are also small differences across the full range of the plot.
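The "crude rescaling factor" step can be reproduced with something like the following least-squares fit of one spectrum onto the other. This is my guess at the procedure; the issue does not spell it out, and the function name is my own.

```python
import numpy as np

def rescale_and_diff(s_a, s_b):
    """Best single scale factor mapping spectrum s_a onto s_b (least squares),
    and the residual after applying it.  Illustrative reconstruction only."""
    scale = np.dot(s_b, s_a) / np.dot(s_a, s_a)
    return scale, s_b - scale * s_a
```

A residual that is large only at the first sample would point at the DC term alone; nonzero values elsewhere, as seen in the plot, indicate the discrepancy is not purely a DC-scaling issue.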
So, I reiterate that MTPowerSpectrumEngine is definitely not completely broken, but it has some rough edges that need to be polished away. It will take some digging to figure this out, though. I may add an adaptive weighting option. I don't think it would be too hard, and from what I have read it should be the default, as it is in mtspec.
A side issue here: we should consider having a version of the mspass container that adds mtspec, but with the caveat that it almost certainly won't work in parallel workflows.
One more addition. I modified the last test to use a very long analysis window. I ran the same notebook but changed the analysis window to 640000 points and got this for the spectra from 0 to 0.25 Hz:
This is a simpler reproduction of the problem that initiated this post. It shows the problem is more insidious than the above led me to believe. The only hypothesis I have to explain this result is that the Slepian functions I'm computing with large windows are not correct. The only basis I have for that is that Prieto has an automatic switch in his code: if the number of samples exceeds 20000, the Slepian functions are computed with a different algorithm. I will test that hypothesis by a temporary modification to the C++ test code for MTPowerSpectrumEngine. I'm going to save and plot the Slepian functions created with large sample sizes.
Well, that was quick and revealing. The dpss algorithm I'm using definitely fails when the length of the window gets long. What is strange is that it does produce a set of orthogonal functions, but they are not even close to the expected form for Slepians. This is apparently a numerical problem that is probably well known but buried in the publications on multitapers.
I tested the dpss algorithm used in our implementation a bit further. With a 50,000 sample window the Slepian functions looked normal. By 100,000 they were clearly bad but not as ridiculous as they were for 1,000,000.
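As a cross-check on those window lengths, scipy's dpss, which diagonalizes the symmetric tridiagonal form of the problem, stays well behaved at large N. A quick sanity test at the 50,000-sample size (thresholds are loose, illustrative choices):

```python
import numpy as np
from scipy.signal.windows import dpss

# Same parameters as the test described below: 50,000 points,
# time-bandwidth product 5, 8 tapers.
n, nw, k = 50000, 5.0, 8
tapers, ratios = dpss(n, nw, Kmax=k, return_ratios=True)

# Well-formed Slepians concentrate essentially all of their energy in the
# design band, so the concentration ratios should be ~1 for all k < 2*NW.
assert ratios.min() > 0.99

# The set should also be orthonormal.
gram = tapers @ tapers.T
assert np.allclose(gram, np.eye(k), atol=1e-6)
```

The same check at 100,000 or 1,000,000 samples would distinguish "orthogonal but wrong" (passes the Gram test, fails concentration) from genuinely correct Slepians, which is exactly the failure mode described above.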
I'm going to revise a number of things in the existing PowerSpectrum estimation.
I'll close this issue when the above are resolved and the changes get pushed all the way to master. Expect more updates - this is work in progress.
I found yet another inconsistency that does more to explain the differences in results from the different multitaper implementations.
I ran this test: I dumped the output of dpss (the Slepian function generator) in the test_spectrum.cc program (modified from current master) for 50,000 points, a time-bandwidth product of 5, and 8 tapers. I get the following output, where the x axis is just sample count:
I then used the dpss function in multitaper.utils (German Prieto's new python library; note his function is mostly a wrapper around the scipy function that computes Slepians) and got this: The two are obviously very different.
Now, I can tell you with 100% certainty that the functions generated by the C++ implementation are orthogonal, as I added that test to test_spectrum.cc.
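It is worth noting that an orthogonality test is a weak check here: any orthonormal basis passes it, Slepian or not. The discriminating property is spectral concentration in the design band. A small illustration (thresholds and the helper function are my own, not the test_spectrum.cc code):

```python
import numpy as np
from scipy.signal.windows import dpss

def band_concentration(v, half_bw, nfft=8192):
    """Fraction of the taper's energy inside |f| <= half_bw (cycles/sample)."""
    power = np.abs(np.fft.fft(v, nfft)) ** 2
    f = np.fft.fftfreq(nfft)
    return power[np.abs(f) <= half_bw].sum() / power.sum()

n, nw, k = 512, 4.0, 6
half_bw = nw / n                      # Slepian design half-bandwidth W = NW/N
tapers = dpss(n, nw, Kmax=k)

# A random orthonormal set passes the orthogonality test just as well...
rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.standard_normal((n, k)))
random_set = q.T
assert np.allclose(random_set @ random_set.T, np.eye(k), atol=1e-10)

# ...but only the Slepians pack their energy into the design band.
assert min(band_concentration(v, half_bw) for v in tapers) > 0.98
assert max(band_concentration(v, half_bw) for v in random_set) < 0.5
```

So the C++ dpss passing an orthogonality test is consistent with it producing an orthogonal-but-wrong basis at long window lengths.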
I discovered a routine in Prieto's library called dpss2. The docstring says dpss2 implements the "original Thomson approach," with the caveat that "This is work in progress and not used." Using the same parameters as above, dpss2 produced the following, which is, visually at least, the same as Prieto's dpss output:
So, this explains the subtle differences I'm now seeing between spectrum estimates from Prieto's implementation and mine with "unity" weights. What I don't yet know is whether this is a fatal problem. I'm going to ask Prieto to comment on this issue, either directly or through me. The next box will hopefully be that comment, for the record.
I am leaving this for the record for any current or future users of the multitaper spectral estimation code in MsPASS. I am the author of that code (in C++), so I need to fix this problem.
The issue is that something is very clearly wrong with the code in the C++ class MTPowerSpectrumEngine, which has python bindings of the same name. I discovered this while working on magnetic data I was processing with MsPASS for a paper looking at seismic noise in the ultra-low-frequency band, with periods of 20 min and longer. The science problem is beside the point; what I needed to do was compute spectra I could compare to seismic data spectra computed previously with Prieto's mtspec program. Krischer has a github repository with python bindings to the original FORTRAN implementation, found here. I loaded mtspec and processed the exact same signal with MTPowerSpectrumEngine and mtspec's mtspec function (the package and its main function have the same name), and I got the figure below:
The blue curve is the spectrum estimated with mtspec and the orange curve is the spectrum computed by MTPowerSpectrumEngine. They are obviously very different, although not absurdly so. Direct calls to the C++ class, and deconvolution algorithms based on multitapers, should be considered broken until I resolve this issue. It will take some digging and a specialized test to fix this problem.
Note I don't think an easy solution is to replace MTPowerSpectrumEngine with mtspec. The reason is that mtspec's python layer is, I think, largely wrappers creating bindings for the FORTRAN code. I seriously doubt it would function in dask/spark, as the wrapped objects are unlikely to work with pickle. They might, but that would also require some testing.