Closed christophwelling closed 4 years ago
The efieldToVoltageConverter is used in pretty much all the tests, so these changes are tested.
I see. I think I got confused because no tests were executed on one of the commits. The changes look good to me. I'm just wondering if we can somehow verify that the tests still work, or only produce small differences consistent with this change, before we just update the references.
I echo Christian's point about confirming the changes are small. For example, you could show a direct comparison on a waveform, one where it has been handled by upsampling, and another where it has been handled by the shift theorem?
Yes, sometimes the tests just don't trigger. I guess maybe the servers are just busy and we get what we (don't) pay for. I attached some plots to show what the Fourier shift does. In the first one, I shifted the same trace by 2 ns, once using the Fourier shift and once using np.roll. The sampling rate was 1 GHz, so there were no rounding errors. In the second, I used the Fourier shift to shift by 1.5 ns, while the sampling rate was again 1 GHz. To confirm that this case is handled correctly, I also plotted the upsampled traces. In both images, the bottom plot is just a zoomed-in version of the top.
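To illustrate the first comparison, here is a minimal sketch in plain NumPy (a hypothetical `fourier_shift` helper, not the actual NuRadioReco code): for a shift that is an integer number of bins, the Fourier shift theorem reproduces np.roll up to floating-point precision.

```python
import numpy as np


def fourier_shift(trace, dt, sampling_rate):
    """Delay `trace` by dt seconds via a phase ramp in the frequency domain.

    Fourier shift theorem: multiplying the spectrum by exp(-2j*pi*f*dt)
    delays the time-domain signal by dt (circularly, like np.roll).
    """
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1. / sampling_rate)
    return np.fft.irfft(spectrum * np.exp(-2j * np.pi * freqs * dt), n=len(trace))


sampling_rate = 1.e9                               # 1 GHz -> 1 ns bins
times = np.arange(256) / sampling_rate
trace = np.exp(-0.5 * ((times - 100.e-9) / 5.e-9) ** 2)  # Gaussian pulse

# 2 ns is exactly 2 bins at 1 GHz, so there is no rounding error and the
# Fourier shift must agree with np.roll.
diff = np.max(np.abs(fourier_shift(trace, 2.e-9, sampling_rate) - np.roll(trace, 2)))
print(diff)  # close to machine precision
```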
Hi, thanks! In the second case, the blue and green traces are different, even though the upsampled traces are pretty similar. Do you know why? If the time shift algorithm and np.roll are interchangeable, as the first plot suggests, then I don't see why the pre- and post-shift traces are different in the second case.
In the second plot, the time shift is 1.5 ns, but the binning without upsampling is 1 ns. Therefore, the time shift cannot be done using np.roll. So after the time shift, the pulse is sampled at different places, which is why its shape seems to change. But if you compare it to the upsampled trace, you can see that both agree at the points where the trace is sampled, as they should.
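This sub-bin behavior can be checked numerically: shift a well band-limited pulse by a non-integer number of bins and compare against the analytic pulse evaluated at the shifted sample points. A sketch in plain NumPy (the `fourier_shift` and `pulse` helpers are hypothetical, not the NuRadioReco implementation):

```python
import numpy as np


def fourier_shift(trace, dt, sampling_rate):
    # Delay the trace by dt seconds using the Fourier shift theorem.
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1. / sampling_rate)
    return np.fft.irfft(spectrum * np.exp(-2j * np.pi * freqs * dt), n=len(trace))


def pulse(t):
    # Smooth, effectively band-limited Gaussian pulse centered at 100 ns.
    return np.exp(-0.5 * ((t - 100.e-9) / 5.e-9) ** 2)


sampling_rate = 1.e9                      # 1 ns bins
times = np.arange(256) / sampling_rate
trace = pulse(times)

# 1.5 ns is not a multiple of the 1 ns binning, so np.roll cannot do this.
shifted = fourier_shift(trace, 1.5e-9, sampling_rate)

# A delay by dt means the new trace samples the original pulse at t - dt.
expected = pulse(times - 1.5e-9)
residual = np.max(np.abs(shifted - expected))
print(residual)  # tiny: the samples land on the shifted continuous pulse
```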
So what is the status here? Are you satisfied with the changes? The tests are failing, but I think that is only because the trace is no longer automatically upsampled in the efieldToVoltageConverter.
I like the changes. But can you please take a look at why exactly the test fails? And maybe fix it?
There are two reasons the tests fail. First, the automatic upsampling to 10 GHz was removed, so the electric field (and the channels created from it) have a sampling rate of 5 GHz, which is what the CoREAS file had. Since not having to do the upsampling was the whole point of this, I think this change is fine?
Second, after the starting bin for the channel trace has been calculated, the remainder of the time offset, which is too small to be corrected by rolling the trace, is calculated and corrected using the Fourier shift. So these changes remove a rounding error in the trace timing, which is the intended behavior.
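The scheme described here, integer-bin part via np.roll plus a sub-bin remainder via the Fourier shift, can be sketched like this (a hypothetical `shift_trace` helper, not the actual implementation):

```python
import numpy as np


def shift_trace(trace, time_offset, sampling_rate):
    """Apply time_offset as an integer-bin roll plus a sub-bin Fourier shift.

    The whole-bin part is a cheap np.roll; only the remainder, too small
    to be corrected by rolling, gets a frequency-domain phase ramp.
    """
    n_bins = int(np.round(time_offset * sampling_rate))   # starting-bin part
    remainder = time_offset - n_bins / sampling_rate      # sub-bin residual
    rolled = np.roll(trace, n_bins)
    spectrum = np.fft.rfft(rolled)
    freqs = np.fft.rfftfreq(len(trace), d=1. / sampling_rate)
    return np.fft.irfft(spectrum * np.exp(-2j * np.pi * freqs * remainder),
                        n=len(trace))


sampling_rate = 1.e9
times = np.arange(256) / sampling_rate
trace = np.exp(-0.5 * ((times - 100.e-9) / 5.e-9) ** 2)

# Rolling 2 bins and Fourier-shifting the 0.3 ns remainder is equivalent
# to a single Fourier shift of the full 2.3 ns offset, since the phase
# ramps multiply (i.e. the phases add).
combined = shift_trace(trace, 2.3e-9, sampling_rate)
spectrum = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(len(trace), d=1. / sampling_rate)
direct = np.fft.irfft(spectrum * np.exp(-2j * np.pi * freqs * 2.3e-9), n=len(trace))
print(np.max(np.abs(combined - direct)))  # agreement at floating-point precision
```

Splitting the shift this way matters for the tests above: it removes the rounding error you get from choosing the starting bin alone.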
If I run the electricFieldResampler before the efieldToVoltageConverter and remove the time shift correction using the Fourier shift, the tests work again. So I think these are all intended changes and the best solution is to update the reference.
Ok, updating the reference works for me.
yes I agree, we need to update the references. But before doing that, can you manually check that the differences are small? It should be below 1% change in amplitudes.
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "compareToReference.py", line 115, in

Mismatched elements: 1 / 1 (100%)
Max absolute difference: 0.00766866
Max relative difference: 0.08711172
 x: array(0.080364)
 y: array(0.088032)
The difference is 8%, that seems pretty large.
what quantity is that?
Sorry, missed the last line: TRACE: ROOT -> '0' -> 'station_parameters' -> 'channels_max_amplitude'
8% difference in max amplitude doesn't seem good.
That's weird, especially because the first few events work fine and then we suddenly get an 8% difference. Also, this test uses CoREAS input, right? And we also didn't upsample the traces, so maybe it is expected (?)
I did a pull from master and things improved. It looks like the changes to the randomState in the channelGenericNoiseAdder were not on this branch. If I also add a channelResampler to upsample to 10 GHz (which used to be done inside the efieldToVoltageConverter), the difference drops to 0.1%. So I would say it looks good now.
I don't understand your comment that the differences drop to 0.1% with 10 GHz upsampling. Shouldn't the results be unchanged by upsampling, since the traces are shifted correctly now? The reason might be that the maximum amplitude is just not sampled at the right point because the traces are downsampled after the efieldToVoltageConverter.
Previously, the efieldToVoltageConverter upsampled the E-field to 10 GHz, so all channels had that sampling rate as well. Now this is no longer done, so the channels have a different sampling rate and number of samples, which causes the noise adder to generate different noise.
I also started some of the NuRadioMC tests. There are some tests without noise that should agree pretty well. Also the Veff test should pass.
The SingleEvent tests for NuRadioMC are off by less than 1%. The Veff test is off by one triggered event (which is displayed as 341 sigma for some reason). Close enough?
Yes I think we're good to go.
Can you approve the changes then?
Adds a method to the baseTrace class that uses the Fourier shift theorem to apply a time shift. Uses this method in the efieldToVoltageConverter to reach high timing accuracy without the need to upsample.