Closed: stigersh closed this issue 4 years ago
I think I found the problem. Lambda layers weren't designed to be used with trainable variables, so everything inside the Lambda becomes untrainable. That's why the code with the Lambda gets stuck at the same accuracy value. To write a custom layer with trainable variables, follow this guide: writing-your-own-keras-layers
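As a sketch of what that guide describes, here is a minimal custom layer that registers a trainable variable via `add_weight` (the layer name and shape here are illustrative, not from the original issue):

```python
import tensorflow as tf

class ScaleLayer(tf.keras.layers.Layer):
    """Multiplies its input by a single trainable scalar."""

    def build(self, input_shape):
        # add_weight registers the variable with Keras, so it shows up
        # in trainable_weights and gets updated by the optimizer.
        self.scale = self.add_weight(
            name="scale", shape=(), initializer="ones", trainable=True
        )

    def call(self, inputs):
        return inputs * self.scale

layer = ScaleLayer()
_ = layer(tf.zeros((1, 4)))          # first call triggers build()
print(len(layer.trainable_weights))  # → 1
```

Unlike a Lambda, a subclassed layer tracks its variables, so the optimizer can actually train them.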
Summary
During debugging I checked the following: I have a Keras model that I wrapped in a Lambda. For some reason this made the optimization start from a worse accuracy and get stuck at that value, whereas without the Lambda wrapper everything works fine.
Works badly:
Works fine:
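For contrast, a hedged sketch of the working setup (same assumed inner model, no Lambda): calling the inner model directly in the functional API lets the outer model track its weights.

```python
import tensorflow as tf

inner = tf.keras.Sequential([tf.keras.layers.Dense(4)])

# Calling the inner model directly: its variables ARE tracked.
inputs = tf.keras.Input(shape=(4,))
outputs = inner(inputs)
outer = tf.keras.Model(inputs, outputs)

# Dense kernel + bias are visible, so training proceeds normally.
print(len(outer.trainable_weights))  # → 2
```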
Environment
Logs or source codes for reproduction