Likely because the invert() curve is tuned for values that are too low, now that almost all of the twiddlers are producing higher values.
The invert() curve is tuned very specifically to the expected range of goodness values.
As we work on #276, the expected outcome is much lower, so the spread is lower.
Ideally we would calculate this attenuation curve once by going through each twiddler, passing in an f value of 1.0 (for the ones that have a positive weight), and then adding all of those up. (We'd have to special-case humanLikelihood to return the highest technique it knows of... which would be guess. Hmm.)
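A rough sketch of that idea, with made-up types, names, and weights (none of this matches the real twiddler API): walk every twiddler, feed in f = 1.0 for the positive-weight ones, and sum the contributions to get the ceiling the invert() curve should be tuned against.

```go
// Hypothetical sketch only: none of these names or weights come from the real
// codebase; they just illustrate summing each twiddler's maximum contribution.
package main

import "fmt"

// guessCost is a placeholder for the cost of the guess technique, the most
// expensive technique humanLikelihood knows of.
const guessCost = 100.0

// twiddler is a stand-in for the real twiddler type: a weight plus a function
// that maps an input f in [0, 1.0] to a contribution.
type twiddler struct {
	name   string
	weight float64
	fn     func(f float64) float64
}

// maxExpectedGoodness walks every twiddler, passes in f = 1.0 to the ones with
// a positive weight, and adds up the results. The total is the ceiling the
// invert() attenuation curve should be tuned against.
func maxExpectedGoodness(twiddlers []twiddler) float64 {
	total := 0.0
	for _, t := range twiddlers {
		if t.weight <= 0 {
			continue
		}
		total += t.weight * t.fn(1.0)
	}
	return total
}

func main() {
	twiddlers := []twiddler{
		// Made-up twiddler with a linear response.
		{"pointingInSameBlock", 2.0, func(f float64) float64 { return f }},
		// humanLikelihood is the special case: instead of f = 1.0 it returns
		// the cost of the most expensive technique it knows of, i.e. guess.
		{"humanLikelihood", 5.0, func(f float64) float64 { return guessCost }},
	}
	fmt.Println("max expected goodness:", maxExpectedGoodness(twiddlers))
}
```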
Another approach to training it is to have a go generate-style tool that keeps track of the max and median goodness over repeated runs and then sets the value based on that.
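A minimal sketch of that second approach, again with entirely hypothetical names (sampleGoodness, tunegoodness, the generated constants): run many times, record the max and median goodness, and emit a generated source file so the tuned values are baked in.

```go
// Hypothetical sketch of the go generate-style idea: all names here are made
// up. Repeatedly sample goodness, record the max and median, and write out a
// generated file with constants the invert() curve can be tuned against.
package main

import (
	"fmt"
	"os"
	"sort"
)

// sampleGoodness stands in for one real run; in the actual tool it would solve
// a puzzle and report the goodness of the best candidate step.
func sampleGoodness(run int) float64 {
	return float64(run%7) + 0.5
}

func main() {
	const runs = 1000
	samples := make([]float64, 0, runs)
	maxGoodness := 0.0
	for i := 0; i < runs; i++ {
		g := sampleGoodness(i)
		samples = append(samples, g)
		if g > maxGoodness {
			maxGoodness = g
		}
	}
	sort.Float64s(samples)
	median := samples[len(samples)/2]

	// Emit a generated file, in the spirit of go generate, so the tuned values
	// are baked in rather than recomputed every run.
	out := fmt.Sprintf(`package sudoku

// Code generated by a hypothetical tunegoodness tool; DO NOT EDIT.
const (
	maxObservedGoodness    = %f
	medianObservedGoodness = %f
)
`, maxGoodness, median)
	if err := os.WriteFile("goodness_tuning.go", []byte(out), 0644); err != nil {
		panic(err)
	}
}
```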
Even in cases where the goodness should be basically similar, they're way too far out in front.