Closed — ghost closed this issue 6 years ago
I checked the source code of the GaussianNoise layer in Keras, and it uses the Keras backend, which here means TensorFlow. So setting the TensorFlow seed will always generate the same noise sequence.
Still, I will keep this issue open for one day and then close it.
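To make the seeding semantics concrete, here is a minimal NumPy sketch (an illustration of the same idea, not the Keras/TensorFlow internals): a graph-level seed like `set_random_seed(3)` makes the whole *sequence* of noise reproducible across runs, while each training step within a run still draws different noise.

```python
import numpy as np

# Each draw within a "run" is different noise...
rng1 = np.random.RandomState(3)
a1 = rng1.normal(size=4)   # noise for step 1
a2 = rng1.normal(size=4)   # noise for step 2: different from a1

# ...but re-running with the same seed reproduces the same sequence.
rng2 = np.random.RandomState(3)
b1 = rng2.normal(size=4)

assert not np.allclose(a1, a2)  # noise changes step to step
assert np.allclose(a1, b1)      # the run as a whole is reproducible
```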
So you're saying the same noise is generated during training? And can you explain the implementation of the normalization layer from the paper? I didn't understand it.
Implementing the normalization layer was the hardest part of the project. At first I thought it was batch normalization, but I was wrong about that. If you check the paper carefully, there is a formula for the normalization in the activation/layer functions table. Here, normalization simply means dividing each vector by its norm, so it becomes a unit vector, and a unit vector has power 1.
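A quick NumPy sketch of that normalization (just the math described above, not the repo's exact Keras code — in Keras it could be, e.g., a `Lambda` layer wrapping an L2-normalize op):

```python
import numpy as np

def normalize(x):
    # Divide each encoder output vector by its L2 norm,
    # so every vector becomes a unit vector (power = ||x||^2 = 1).
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

batch = np.random.RandomState(0).randn(8, 7)  # 8 vectors of length 7
normed = normalize(batch)
powers = np.sum(normed ** 2, axis=-1)          # all ~1.0
```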
Okay cool, I'll check it out. Thanks, man. How did you choose the training and test set sizes? They're not mentioned in the paper. And why is the scale logarithmic in matplotlib?
For this, you should check out how the BER is calculated for a simple modulation scheme over AWGN, like BPSK.
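For example, a sketch of BPSK over AWGN (this is the standard textbook setup, not code from this repo): theoretical BER is 0.5·erfc(√(Eb/N0)), and a Monte Carlo simulation should match it. This also answers the matplotlib question — BER drops by orders of magnitude as Eb/N0 grows, so a log y-axis is the natural choice.

```python
import numpy as np
from math import erfc, sqrt

def bpsk_ber_theory(ebn0_db):
    # Theoretical BER for BPSK over AWGN: 0.5 * erfc(sqrt(Eb/N0))
    ebn0 = 10 ** (ebn0_db / 10.0)
    return 0.5 * erfc(sqrt(ebn0))

# Monte Carlo simulation at Eb/N0 = 4 dB
rng = np.random.RandomState(0)
ebn0_db = 4.0
ebn0 = 10 ** (ebn0_db / 10.0)
n_bits = 200_000

bits = rng.randint(0, 2, n_bits)
symbols = 1 - 2 * bits                     # BPSK mapping: 0 -> +1, 1 -> -1
noise_std = np.sqrt(1.0 / (2.0 * ebn0))    # Es = Eb = 1 per symbol
received = symbols + noise_std * rng.randn(n_bits)
decided = (received < 0).astype(int)       # hard decision at threshold 0
ber_sim = np.mean(decided != bits)
```

Plotting `ber_sim` over a range of Eb/N0 values with `plt.semilogy` gives the familiar waterfall curve.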
@immortal3 Does set_random_seed(3) ensure that during training, the same noise is added to the output of the encoder, or is it different each time?