victor158128 opened this issue 6 years ago
When the code is tested, the initial state must be the same for every test; otherwise you will never know what happened during any parameter tuning. Therefore, to ensure a fixed initial state, rand/rng are used. Don't vary your kernel size. Keep it consistent and then compare the outputs. As for whether the generators are flawed, I don't know, but implementation details may vary between random state generators. I went through all this mess when I translated the MATLAB of this repo to Python.
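A minimal sketch of what fixing the seed buys you (assuming MATLAB R2011a or later, where rng is available; the seed 0 is arbitrary):

```matlab
rng(0);           % fix the global stream's initial state
a = rand(1, 3);   % first draws after seeding

rng(0);           % re-seed: the stream restarts from the same state
b = rand(1, 3);   % identical values to a

isequal(a, b)     % returns logical 1 (true)
```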
@wajihullahbaig Thank you for the thorough explanation. I tried both rand() and rng() with different seeds. Like you said, they give you different initial states. To ensure the initial state is consistent, use the same function and seed. It all makes sense now. I still don't understand how the error rate is affected by the choice of random number generator function when the kernel size stays the same. For example, when the kernel size is set to 7 or 15, both rand('state', 0) and rng(0) produce a 35% error rate. When the kernel size is set to 11, rand('state', 0) produces a 35% error rate, while rng(0) produces a 99% error rate.
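For what it's worth, a minimal sketch showing that the same seed fed to different generators yields different streams; per the MATLAB documentation, rng(0, 'v5uniform') is the modern equivalent of the discouraged rand('state', 0):

```matlab
rng(0);                % default 'twister' generator, seed 0
x = rand(1, 3);

rng(0, 'v5uniform');   % the legacy generator that rand('state', 0) selects
y = rand(1, 3);

isequal(x, y)          % logical 0: same seed, different generator, different stream
```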
What happens is that for a random initial state with a fixed seed, the training/testing batches are always selected the same way, which keeps the testing consistent. If you change the kernel size, you are then bound to get a different accuracy. Also, try changing the depth of the network and you will end up with a different accuracy. There are so many tuning parameters that one can fiddle with.
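A small sketch of why a fixed seed keeps batch selection identical across runs (the variable names and sizes here are illustrative, not taken from the toolbox):

```matlab
rng(0);
numExamples = 60000;           % e.g. the MNIST training set size
batchSize   = 50;

idx1 = randperm(numExamples);  % shuffled example order, run 1

rng(0);                        % same seed => same shuffle on run 2
idx2 = randperm(numExamples);

isequal(idx1, idx2)            % logical 1: batches come out identical
firstBatch = idx1(1:batchSize);
```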
There is a line in test_example_CNN.m that says "rand('state', 0)". When I looked it up in the MATLAB documentation, its use is discouraged; "rng(0)" is recommended instead. But when I used "rng(0)" and varied my kernel size, the error rate was vastly different from using "rand('state', 0)". Is every generator other than "twister" flawed? Does that mean the result the "state" generator gives is also flawed?
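For reference, a minimal sketch of how to check which generator the global stream is actually using (assuming MATLAB R2011a or later):

```matlab
rng(0);
s = rng;       % query the current global stream settings
disp(s.Type)   % prints 'twister', the default generator rng(0) selects
disp(s.Seed)   % prints 0
```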