SauceLLC / sauce4zwift

Sauce for Zwift™
GNU General Public License v3.0

W'bal calculation is not time (seconds) based #79

Closed: BasWeg closed this issue 1 year ago

BasWeg commented 1 year ago

Hi Justin,

I've looked into the code and it seems that W'bal is not calculated once per second; it is calculated for every data event. The elapsed time is a float and therefore always > 0.

https://github.com/SauceLLC/sauce4zwift/blob/01f42a990be8b8fe38af50233a416c45f36b47e0/src/stats.mjs#L232

In my opinion, the watt data stream needs to be interpolated to one-second intervals, or the elapsed time needs to be taken into account to calculate the correct joules.
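A sketch of what this suggestion amounts to (hypothetical helper names, not code from the repo): weight each power sample by the real time it covers, so the energy comes out in joules regardless of how irregularly the data events arrive.

```javascript
// Hypothetical sketch, not the project's code.
// A 300 W sample that covers 0.25 s contributes 75 J, not 300 J.
function sampleJoules(watts, elapsedSeconds) {
    return watts * elapsedSeconds;
}

// Accumulating energy over an irregularly spaced stream:
function totalJoules(samples) {  // samples: [{watts, elapsed}, ...]
    return samples.reduce((j, s) => j + sampleJoules(s.watts, s.elapsed), 0);
}
```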

best regards

Bastian

mayfield commented 1 year ago

Hi @BasWeg

The calls to _add are buffered into 1-second averages here: https://github.com/SauceLLC/sauce4zwift/blob/01f42a990be8b8fe38af50233a416c45f36b47e0/src/stats.mjs#L127-L148

Let me know if you still think there is an issue.
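The buffering idea can be sketched roughly like this (a simplified illustration with invented names, not the actual DataCollector code): raw samples are held until at least one second of elapsed time has accumulated, then flushed as a single time-weighted average.

```javascript
// Simplified sketch of 1-second buffering (assumed names, not DataCollector).
class OneSecondBuffer {
    constructor(onFlush) {
        this.onFlush = onFlush;  // called with (avgWatts, elapsedSeconds)
        this.joules = 0;
        this.elapsed = 0;
    }

    add(watts, elapsed) {
        this.joules += watts * elapsed;  // time-weighted accumulation
        this.elapsed += elapsed;
        if (this.elapsed >= 1) {
            // Flush one averaged sample covering >= 1 second.
            this.onFlush(this.joules / this.elapsed, this.elapsed);
            this.joules = 0;
            this.elapsed = 0;
        }
    }
}
```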

BasWeg commented 1 year ago

I think there is still an issue: class ExtendedRollingPower extends sauce.power.RollingPower and overrides the _add function. super._add is called, of course, but the unbuffered value is used for W'bal.

https://github.com/SauceLLC/sauce4zwift/blob/01f42a990be8b8fe38af50233a416c45f36b47e0/src/stats.mjs#L225-L251

mayfield commented 1 year ago

The buffering actually happens in DataCollector, so it's easy to conflate the add and _add calls, but before any RollingPower.add() call is made, the data is buffered by DataCollector.add() first. The data collector wraps RollingPower instances so it can manage a single array for each rider's data. All laps and stats are managed as metadata on top of a single array.

So it goes like this:

```
_recordAthleteStats(state):
    # Every 200 to 10000ms...
    for every collector:
        DataCollector.add(...)
            if elapsed >= 1 second:
                call internal _add() -> which calls collector.roll.add() -> roll._add()
            else:
                buffer data until later
```
BasWeg commented 1 year ago

but if I add `console.log("elapsed " + elapsed);` within this for loop

```javascript
for (let i = 0; i < elapsed; i++) {
    this.wBal = this._wBalIncrementor(value);
}
```

it shows values < 1, like 0.234. Why? If the data is buffered and add() is only called when elapsed >= 1, that should not happen.

mayfield commented 1 year ago

Mine are all around 1 second; how often are you seeing that?

The flush releases the last set of buffered values when the newest value exceeds or lands on the idealGap boundary, so on rare occasions they might cover less than 1 second, but I wouldn't expect this often.

What's most critical is that the calls to _wBalIncrementor happen at approximately a one-second cadence. It does look like, because I'm doing (effectively) integer conversion on the elapsed time, I might be calling it too frequently for the same values, so I'll check that and revise it if so.

BasWeg commented 1 year ago

I need to double check. Is there any simulation mode, so I can get athlete data from just one id, or simulated values?

mayfield commented 1 year ago

Using the command line option `--random-watch` should work for that. I usually run with `npm start -- --inspect --random-watch`.

BasWeg commented 1 year ago

Ah, if elapsed is like 1.0003, the incrementor is called two times.
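This off-by-one is easy to reproduce in isolation (a standalone demonstration, not the project's code): with `i < elapsed` as the loop condition, any elapsed value even slightly above a whole number produces an extra iteration.

```javascript
// Counts how many times the loop body would run for a given float elapsed.
function incrementorCalls(elapsed) {
    let calls = 0;
    for (let i = 0; i < elapsed; i++) {
        calls++;  // stands in for this._wBalIncrementor(value)
    }
    return calls;
}
// incrementorCalls(1.0003) runs twice: i=0 and i=1 both satisfy i < 1.0003.
```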

mayfield commented 1 year ago

Yeah, that's the integer conversion thing I mentioned. I'm taking statistical samples right now. Just doing Math.round(elapsed) for the for loop does not appear to be correct either.

mayfield commented 1 year ago

I'm starting to think I don't need the loop at all actually. The outer add() call will insert padding when idealGap is exceeded and zeros when maxGap is exceeded.

mayfield commented 1 year ago

The non-incremental calculation looks like this: [screenshot of the non-incremental W'bal code]

It only has an inner loop if a sauce.data.Break is encountered, and this is very rare (it's for gigantic breaks, usually the result of a broken file). So it's unlikely to affect the outcome of the W'bal, because the _wPrimeCorrectedPower call will already insert plenty of normal-value pads and zero pads when idealGap and maxGap are exceeded, i.e. enough padding during normal gaps that the W'bal will reach the W' ceiling before the inner sauce.data.Break would have an effect.
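The padding behavior described here can be sketched as follows (assumed names and thresholds; the real sauce implementation differs): gaps larger than idealGap are filled with copies of the last value, and gaps larger than maxGap are filled with zeros instead.

```javascript
// Loose sketch of gap padding (hypothetical names, not the sauce library's API).
function padGaps(samples, {idealGap = 1, maxGap = 5} = {}) {
    const out = [];
    let prev = null;
    for (const s of samples) {  // samples: [{time, watts}, ...]
        if (prev) {
            const gap = s.time - prev.time;
            if (gap > idealGap) {
                // Small gap: repeat the last value. Huge gap: assume no data (zeros).
                const padValue = gap > maxGap ? 0 : prev.watts;
                for (let t = prev.time + idealGap; t < s.time; t += idealGap) {
                    out.push({time: t, watts: padValue});
                }
            }
        }
        out.push(s);
        prev = s;
    }
    return out;
}
```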

The RollingPower instance in s4z is also corrected with the same RollingAverage class and has an idealGap of 1 second, which is what the W'bal algo wants. I need to double check this and run more tests, but hopefully, by removing the loop, the incremental algo will be more accurate and will have one less CPU branch on every data point we process.
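For context, an incremental W'bal step at the one-second cadence discussed in this thread can be sketched with the differential model (the Froncioni/Skiba-Clarke form). The CP and W' values and the function name below are illustrative; this is not s4z's actual incrementor.

```javascript
// Differential W'bal sketch, one call per 1-second sample (illustrative only).
function makeWBalIncrementor(cp, wPrime) {
    let wBal = wPrime;  // start with a full W' tank
    return watts => {
        if (watts >= cp) {
            wBal -= watts - cp;  // deplete W' while riding above CP (1 s step)
        } else {
            // Recover below CP, proportionally to the remaining deficit.
            wBal += (cp - watts) * (wPrime - wBal) / wPrime;
        }
        wBal = Math.min(wPrime, Math.max(0, wBal));  // clamp to [0, W']
        return wBal;
    };
}
```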