Problem: running the same code multiple times with the same seeds produces small numerical differences that accumulate over the course of training. The issue disappears when array sizes are powers of two.
Suggestion: use array sizes that are powers of two for now.
Eventually I would like to implement a workaround (if TensorFlow doesn't provide a built-in option to enable one): when an array size is not a power of two, allocate a backing array whose dimensions are rounded up to powers of two and set the unneeded entries to 0. If this is relevant to you and you want to work on that workaround, please do (and drop a comment here so people don't duplicate work).
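As a rough illustration of the padding idea (not the eventual TensorFlow-level implementation, just a sketch using NumPy), one could round each dimension up to the next power of two and zero-fill the extra entries; the helper names `next_pow2` and `pad_to_pow2` are hypothetical:

```python
import numpy as np

def next_pow2(n: int) -> int:
    # Smallest power of two >= n, for n >= 1.
    return 1 << (n - 1).bit_length()

def pad_to_pow2(a: np.ndarray) -> np.ndarray:
    # Zero-pad every dimension up to the next power of two;
    # original entries keep their positions at the low indices.
    pad = [(0, next_pow2(d) - d) for d in a.shape]
    return np.pad(a, pad, mode="constant", constant_values=0)

x = np.ones((3, 5))
y = pad_to_pow2(x)
print(y.shape)  # (4, 8)
```

The real workaround would also have to mask the padded entries out of any reductions (sums, means, losses) so they don't change the computed values.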