partofthething closed this issue 9 years ago
If you run with N=1000 instead of 100, things look much better. What's the problem?
Here's a hint at the problem: the SuperSmoother is giving much choppier results than the FORTRAN SUPSMU function. Using f2py, I've run the direct comparison below (on the sample problem from the supersmoother publication). My guess is that if I smooth this out, the ACE results will start looking good.
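One simple way to put a number on "choppier" when comparing the two smoothers' outputs on the same grid is the mean squared second difference. This helper is just my own illustration, not part of the package:

```python
import numpy as np

def roughness(y):
    """Mean squared second difference of a curve sampled on a uniform grid.

    A perfectly linear trend scores 0; a jagged, oscillating output scores
    high. Useful for comparing two smoothers' outputs on the same data.
    """
    d2 = np.diff(np.asarray(y, dtype=float), n=2)
    return float(np.mean(d2 ** 2))
```

Running this on the Python and FORTRAN outputs side by side makes the choppiness difference quantitative instead of eyeball-only.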
Fixed the SuperSmoother in d8648afcabae4fb12c4fe7608a12fbbc8d5300d6. There was a "secret" TWEETER smooth that was supposed to happen after interpolating between the primary smooths. The primary smooths also had a minor asymmetric-window issue that is now fixed. All smoothers now reproduce the FORTRAN results exactly. However, ACE is still not behaving as well as we'd like.
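The real fix is in the linked commit; as a rough sketch of the missing step, here's the idea using a plain moving average as a stand-in for the package's local smooths (the function names and structure here are mine, only the 0.05 tweeter span comes from Friedman's paper):

```python
import numpy as np

def moving_average(y, span):
    """Symmetric moving average over roughly span * len(y) points.

    Stand-in for supersmoother's local smooths; the window is clipped
    at the boundaries.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    half = max(1, int(span * n) // 2)
    out = np.empty(n)
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)
        out[i] = y[lo:hi].mean()
    return out

def final_tweeter_pass(interpolated, tweeter_span=0.05):
    """The step that was missing: after interpolating between the primary
    smooths, run one more smooth at the smallest ("tweeter") span."""
    return moving_average(interpolated, tweeter_span)
```

Skipping that last pass leaves the interpolated curve with exactly the kind of residual choppiness seen in the comparison above.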
OK, I fixed the issue where I wasn't keeping the transform means at 0 and the standard deviation of theta at 1. It's performing much better now. It's not as clean as the example in Wang, but the results I'm getting on the sample problem from the original ACE publication are pretty consistent. I think it's working as expected now.
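For anyone following along, the re-standardization I'm talking about amounts to something like this each iteration (my own minimal sketch, not the package's actual code):

```python
import numpy as np

def restandardize_theta(theta):
    """Center theta(y) to mean 0 and rescale it to unit standard deviation.

    ACE needs this every iteration: without the unit-variance constraint on
    the response transform, the alternating conditional expectations can
    drift toward the trivial solution theta -> 0.
    """
    theta = np.asarray(theta, dtype=float)
    theta = theta - theta.mean()  # mean = 0
    std = theta.std()
    if std > 0:
        theta = theta / std       # stddev = 1
    return theta
```

The predictor transforms phi_i(x_i) get the mean-zero centering too; only theta carries the unit-variance constraint.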
The excellent test problem in [Wang, "Estimating Optimal Transformations for Multiple Regression Using the ACE Algorithm"] is still acting up a little. On one hand, the basic shapes of the various components are being recovered. On the other hand, they are quite noisy and not nearly as clean as in the paper. This suggests something is still slightly off in ACE.