Learning the computed λ by taking the expectation is extremely easy for a neural network. Based on #7, we hope that the DNN can do better than this. So, to force it to do better, we give it the computed λ as input and ask it to learn only the difference between the real λ (the one used to synthesize the data) and the estimated λ. Our question is: can we help the neural network learn the data by giving it mathematical hints about it?
To do so, define ζ to be the estimate of λ computed by the expectation:
ζ = Σ_{k=0,...,K} (k · H[k]) / M.
Define a new histogram Z, computed from H using ζ. The new histogram records the difference between the observed counts and the counts predicted by a Poisson distribution with rate ζ, i.e., run the following loop for k = 0, ..., K:
Z[k] := H[k] - M * Poisson(ζ,k).
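The two steps above can be sketched as follows. This is a minimal illustration, assuming H is a list of K+1 bin counts built from M synthetic Poisson samples; the `sample_poisson` helper and the concrete parameters (λ = 4, M = 10000, seed 0) are illustrative choices, not part of the exercise.

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw one Poisson(lam) sample (Knuth's multiplication method)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def poisson_pmf(lam, k):
    """Poisson probability mass: e^(-lam) * lam^k / k!."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Synthesize a histogram H from M Poisson(lam) samples.
rng = random.Random(0)
lam, M = 4.0, 10_000
samples = [sample_poisson(lam, rng) for _ in range(M)]
K = max(samples)
H = [0] * (K + 1)
for s in samples:
    H[s] += 1

# Step 1: zeta, the expectation-based estimate of lambda.
zeta = sum(k * H[k] for k in range(K + 1)) / M

# Step 2: residual histogram Z[k] = H[k] - M * Poisson(zeta, k).
Z = [H[k] - M * poisson_pmf(zeta, k) for k in range(K + 1)]
```

Z (together with ζ) would then replace H as the network's input; since Σ_k H[k] = M and the Poisson masses sum to roughly one over the observed range, the entries of Z sum to approximately zero.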
Now apply the DNN. The new data contains all the information the previous DNN received, but presented in a slightly more convenient way. Hopefully, the results are better.
Design and implement an experiment to give a conclusive answer to this question.