sdatkinson / neural-amp-modeler

Neural network emulator for guitar amplifiers.
MIT License

[BUG] Final prediction ESR doesn't match what's computed during the trainer's validation step #489

Open 2-dor opened 1 month ago

2-dor commented 1 month ago

Hi Steve,

I've been re-training some models with trainer version v0.10.0 and ran into something I remember happening a while back.

In the "checkpoints" folder, my lowest ESR seems to have been 0.01003; two other checkpoints with ESR 0.01004.

However, when the trainer is stopped (I hit Ctrl + C in the CLI), it reports an ESR of 0.01006.
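(For context, ESR here is the usual error-to-signal ratio. A minimal sketch of the metric, assuming 1-D audio tensors and not the trainer's own API:)

```python
import torch

def esr(target: torch.Tensor, prediction: torch.Tensor) -> torch.Tensor:
    # Error-to-signal ratio: energy of the error divided by energy of the target.
    return torch.sum((target - prediction) ** 2) / torch.sum(target ** 2)
```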


sdatkinson commented 1 month ago

Yeah, I've seen this.

There's a small discrepancy between how the validation step is computed and how that final prediction is run. In validation, the input is processed as-is and the output is left-cropped to match the model's receptive field. In the final prediction, the input is pre-padded with zeros so that it's always the exact same output being reported on, no matter which model is used (and whatever its receptive field happens to be).
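Here's a minimal sketch of the two paths, just to show why the two numbers can come out slightly different. The names and the receptive-field value are hypothetical, not the trainer's actual code:

```python
import torch

def esr(target: torch.Tensor, prediction: torch.Tensor) -> torch.Tensor:
    # Error-to-signal ratio: error energy over target energy.
    return torch.sum((target - prediction) ** 2) / torch.sum(target ** 2)

RECEPTIVE_FIELD = 8192  # hypothetical; the real value depends on the architecture

def validation_style_esr(model, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Validation path: feed the input as-is and score only the samples past the
    # receptive field, where the model has enough history to predict.
    pred = model(x)
    n = min(pred.shape[-1], y.shape[-1] - (RECEPTIVE_FIELD - 1))
    return esr(y[..., -n:], pred[..., -n:])

def final_prediction_style_esr(model, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Final-prediction path: left-pad the input with zeros so the reported output
    # covers the whole target, whatever the receptive field happens to be.
    x_padded = torch.cat([torch.zeros(RECEPTIVE_FIELD - 1), x])
    pred = model(x_padded)
    return esr(y, pred[..., -y.shape[-1]:])
```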

I'll leave this Issue open because I might be able to do a bit of refactoring elsewhere that makes this all agree (I'm thinking of the data processing code), but I'd rather hold off and do that instead of jumping in on this now; with the tools that are in the code right now, the result would probably come out kind of ugly.

To take a step back, this discrepancy won't cause any real problems--if the difference between two models is their ability to predict a bit of silence, then that's probably not telling you which model is actually better for real 😉.