Closed bjuergens closed 3 years ago
iirc, some papers have reported that virtual batch normalization and weight decay have the same effect on training.
Virtual batch normalization could imo be particularly interesting for scaling the feature vector of autoencoders
closed because of inactivity. feel free to reopen if we want to address this again
first decide if this is even relevant when working on pixel data
I suggest that this can be done as a generic wrapper for all envs.
To make sure each worker produces the same normalization, the "random agent" is not truly random, but fully deterministic. It is essential that the reference observations are always identical.
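A minimal sketch of what such a wrapper could look like. Everything here is an assumption for illustration: `DummyEnv` stands in for a real environment, the action sampling assumes a discrete action space, and `n_reference_steps` / the seed values are arbitrary. The key idea from above is the seeded RNG, so every worker collects the exact same reference observations and therefore derives identical normalization constants.

```python
import numpy as np

class DummyEnv:
    """Hypothetical minimal env used only to make the sketch runnable."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
    def reset(self):
        return self.rng.normal(5.0, 2.0, size=4)
    def step(self, action):
        obs = self.rng.normal(5.0, 2.0, size=4)
        return obs, 0.0, False, {}

class VBNWrapper:
    """Sketch of a generic virtual-batch-normalization env wrapper.

    Statistics come from a fixed reference batch collected with a
    seeded (hence deterministic) "random" agent, so all workers
    compute the same mean/std.
    """
    def __init__(self, env, n_reference_steps=256, seed=42):
        self.env = env
        batch = self._collect_reference_batch(n_reference_steps, seed)
        self.mean = batch.mean(axis=0)
        self.std = batch.std(axis=0) + 1e-8  # avoid division by zero

    def _collect_reference_batch(self, n_steps, seed):
        rng = np.random.default_rng(seed)  # deterministic "random" agent
        obs = self.env.reset()
        batch = [obs]
        for _ in range(n_steps - 1):
            action = rng.integers(0, 2)  # assumes a discrete action space
            obs, _, done, _ = self.env.step(action)
            batch.append(obs)
            if done:
                obs = self.env.reset()
        return np.asarray(batch)

    def normalize(self, obs):
        return (obs - self.mean) / self.std

    def reset(self):
        return self.normalize(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self.normalize(obs), reward, done, info
```

Because both the env and the agent are seeded, constructing the wrapper twice with the same seeds yields bitwise-identical normalization constants, which is the property needed for distributed workers.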