AndrewLevin closed this 4 years ago
If I read #286 correctly, everything works except for whatever is writing the config file, yes? Wasn't the point of introducing the inflection finder to use this to set the threshold? Why is the seemingly good value from the inflection finding not winding up in the written configuration?
Also, in the table from this PR, it seems that every point in the "before" winds up 1 DAC value lower in the "after" (except for the two problem VFATs). Is this behaviour desired (it seems systematic)?
No, it is not just a problem with writing the config file; the actual algorithm used to find the threshold is also at fault.
The result of the inflection point finder is not currently used; we are using the much simpler algorithm of scanning the threshold from low to high and finding the first threshold corresponding to an sbit rate of 100.
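As a rough sketch, the simple low-to-high scan described above can be written as follows. The function name, data layout, and target value are illustrative assumptions, not the actual gem-plotting-tools implementation.

```python
# Illustrative sketch only; names and the target value are assumptions.
TARGET_RATE = 100  # target sbit rate (assumed for illustration)

def find_threshold_low_to_high(thresholds, rates, target=TARGET_RATE):
    """Return the first threshold DAC value, scanning from low to high,
    at which the measured sbit rate has fallen below the target."""
    for dac, rate in sorted(zip(thresholds, rates)):
        if rate < target:
            return dac
    return None  # rate never fell below the target
```

For a monotonically falling rate curve this picks the first DAC value past the crossing point; the discussion below is about what happens when the curve is not monotonic.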
It is expected that the threshold values change by 1 in this pull request, because we are now finding the first threshold where the sbitrate is above 100 instead of the first threshold where the sbitrate is below 100.
Potentially, we could continue to scan from low to high if we start from the inflection point, although it is not clear to me what the advantage of that would be compared to scanning from high to low, and of course the disadvantage is that the inflection point finder could fail.
> It is expected that the threshold values change by 1 in this pull request, because we are now finding the first threshold where the sbitrate is above 100 instead of the first threshold where the sbitrate is below 100.
I think that this change affects the semantics of the function. It is changed from "the first threshold for which the rate is below m" to "the latest threshold for which the rate was above m". Couldn't you add 1 to the `dacValX` value in order to preserve the semantics?
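To illustrate the off-by-one between the two conventions, here is a toy sketch with made-up numbers (the real `dacValX` variable is not modeled, just the two selection rules):

```python
# Toy example: monotonically falling sbit rate vs. threshold DAC.
# All numbers are made up for illustration.
thresholds = [10, 11, 12, 13, 14]
rates      = [500, 300, 150, 80, 40]
target = 100

# Old semantics: first threshold (low to high) with rate below target.
first_below = next(t for t, r in zip(thresholds, rates) if r < target)

# New semantics: scanning high to low, first threshold with rate above target.
last_above = next(t for t, r in zip(reversed(thresholds), reversed(rates))
                  if r > target)

# The two rules differ by exactly one DAC step, so adding 1 to the
# high-to-low result restores the old semantics.
assert first_below == 13
assert last_above == 12
assert last_above + 1 == first_below
```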
> Potentially, we could continue to scan from low to high if we start from the inflection point, although it is not clear to me what the advantage of that would be compared to scanning from high to low, and of course the disadvantage is that the inflection point finder could fail.
Agreed. It also ensures that you are on the "right side of the Sbit rate bump" for all channels.
If you carefully look at the plots below, you'll see that some channels fire very late (hence the trimming). Therefore, it could happen that the Sbit rate is below the defined target but that a channel hasn't yet reached its minimal activation threshold. I'm not sure that is a good reason by itself, but it is one among others. ;)
> It is expected that the threshold values change by 1 in this pull request, because we are now finding the first threshold where the sbitrate is above 100 instead of the first threshold where the sbitrate is below 100.

> I think that this change affects the semantics of the function. It is changed from "the first threshold for which the rate is below m" to "the latest threshold for which the rate was above m". Couldn't you add 1 to the `dacValX` value in order to preserve the semantics?

> Potentially, we could continue to scan from low to high if we start from the inflection point, although it is not clear to me what the advantage of that would be compared to scanning from high to low, and of course the disadvantage is that the inflection point finder could fail.

> Agreed. It also ensures that you are on the "right side of the Sbit rate bump" for all channels.
Sorry, you agree with which option, starting at the inflection point or scanning from high to low?
> If you carefully look at the plots below, you'll see that some channels fire very late (hence the trimming). Therefore, it could happen that the Sbit rate is below the defined target but that a channel hasn't yet reached its minimal activation threshold. I'm not sure that is a good reason by itself
A good reason for what?
Sorry for the unclear answer, I was probably too tired. :-/
> Sorry, you agree with which option, starting at the inflection point or scanning from high to low?
With your fix, scanning from high to low.
> If you carefully look at the plots below, you'll see that some channels fire very late (hence the trimming). Therefore, it could happen that the Sbit rate is below the defined target but that a channel hasn't yet reached its minimal activation threshold. I'm not sure that is a good reason by itself

> A good reason for what?
A good reason for scanning from high to low.
Rebased and retargeted to release/legacy-1.5. Also added one to the ARM DAC so that it is the same as before (although I am pretty sure it was an arbitrary choice to make it the first ARM DAC above 100 rather than the last ARM DAC below 100).
> (although I am pretty sure it was an arbitrary choice to make it the first ARM DAC above 100 rather than the last ARM DAC below 100).
It may have been arbitrary, but it makes a lot of sense now. In the case of a scan with the "Sbit rate drop issue", choosing the last (in this direction; first in the other direction) ARM DAC below 100 prevents working at an Sbit rate of ~10 MHz.
Due to non-monotonicity at low threshold in the sbitrate vs threshold plot, if we scan the threshold from low to high and take the first instance, we may select the wrong threshold.
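A minimal sketch of this failure mode, with made-up rates: a noisy dip below the target at low threshold fools the low-to-high scan, while a high-to-low scan lands on the falling edge of the curve.

```python
# Illustrative, made-up data: the rate dips below the target at
# threshold 0, then rises into the "Sbit rate bump" before falling.
thresholds = [0, 1, 2, 3, 4, 5, 6, 7]
rates      = [80, 2000, 1500, 400, 150, 90, 40, 10]
target = 100

# Low-to-high scan: the noisy dip at threshold 0 (rate 80 < 100) is
# picked up immediately, on the wrong side of the bump.
wrong = next(t for t, r in zip(thresholds, rates) if r < target)

# High-to-low scan: first threshold whose rate rises above the target,
# which sits on the falling edge of the curve.
right = next(t for t, r in zip(reversed(thresholds), reversed(rates))
             if r > target)

assert wrong == 0
assert right == 4
```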
Description
This pull request scans for the threshold corresponding to sbit rate = 100 from high threshold to low threshold instead of low threshold to high threshold.
Types of changes
Motivation and Context
Resolves issue https://github.com/cms-gem-daq-project/gem-plotting-tools/issues/286
How Has This Been Tested?
Yes, for the sbit rate data provided in the issue above:
With the `HEAD` of `develop`:

With this pull request:
Screenshots (if appropriate):
Checklist: