LambdaConglomerate / x9115lam


How long do we run models? #25

Closed meneal closed 8 years ago

meneal commented 8 years ago

So up until this point we've been running models for some arbitrary amount of time, since we moved away from using energy as a metric. I've used all kinds of values in testing; the most recent test runner used 10 retries with 5,000 iterations each.

The best thing would be to determine when to actually stop, or how long to run. I'm really not sure how to approach this yet.

One potential way to deal with this is to implement a convergence check: pick an epsilon value that counts as close enough to the true Pareto frontier (PF), and if we never actually converge, just end after some fixed number of iterations/retries. The nice thing about this setup is that you can just run the rig and have one less tuning factor to deal with, at least for cases where we can converge in a reasonable amount of time.
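A sketch of what that stopping rule could look like, assuming the optimizer exposes a step function that returns the current frontier as a list of objective tuples. `frontier_delta`, `run_until_converged`, and all the parameter values here are hypothetical names for illustration, not anything from the repo:

```python
import math

def frontier_delta(old_front, new_front):
    """Mean Euclidean distance from each old point to its nearest new point.

    A small delta means the frontier has essentially stopped moving.
    Both fronts are lists of objective-value tuples.
    """
    if not old_front or not new_front:
        return float("inf")
    total = sum(min(math.dist(p, q) for q in new_front) for p in old_front)
    return total / len(old_front)

def run_until_converged(step, epsilon=0.01, patience=5, max_iters=20000):
    """Call step() (one optimizer iteration, returning the current front)
    until the front moves less than epsilon for `patience` checks in a row,
    or until max_iters as a hard cap for models that never settle.
    """
    prev, still = None, 0
    for i in range(max_iters):
        front = step()
        if prev is not None and frontier_delta(prev, front) < epsilon:
            still += 1
            if still >= patience:
                return front, i + 1   # converged early
        else:
            still = 0                 # front moved; reset the streak
        prev = front
    return prev, max_iters            # hit the hard cap without converging
```

The `patience` streak is there so one quiet iteration doesn't end the run prematurely; the hard cap covers the "never actually converge" case.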

Any ideas on this?

meneal commented 8 years ago

So this is back to being relevant again. I wanted to see whether we'd come close to anything resembling the frontier using long run cycles, so I ran DTLZ for 20,000 iterations. This is the resulting front:

```
22.3741600776 33.4268146421
0.516204009374 5.32846111551
3.47519865818 13.6827228583
13.3005774821 56.0246160188
0.350991488779 41.1366743709
10.0124224305 5.83740670029
```

One bad thing right off the bat is that we're getting a good number of duplicates, especially when we run for that long. And even with 20K iterations we still don't hit the frontier, though these points are closer than anything we've seen up to this point. How long to run still feels like a shot in the dark, especially with the more complex models.

meneal commented 8 years ago

I'm still not entirely sure why we're ending up with duplicates.
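One guess, not confirmed against the code: duplicates appear when candidates are appended to the archive without any equality check, so the same objective vector can be re-added many times over a long run. A minimal sketch of a tolerance-based duplicate filter; the helper name, the `decimals` parameter, and the list-of-tuples representation are all assumptions:

```python
def add_to_front(front, point, decimals=6):
    """Append `point` to `front` only if no existing member matches it
    when both are rounded to `decimals` places. This catches exact
    re-adds as well as floating-point near-copies.
    """
    key = tuple(round(v, decimals) for v in point)
    seen = {tuple(round(v, decimals) for v in p) for p in front}
    if key not in seen:
        front.append(point)
    return front
```

Rounding rather than exact comparison matters here because two runs of the same candidate can produce objective values that differ only in the last few bits.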

meneal commented 8 years ago

Four-member front for zdt1 at 20K iterations:

```
0.577260542544 1.93248024777
0.00604645791752 3.47489821472
0.0524067829959 2.97549525627
0.216016423846 2.5894670637
```
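For zdt1 the true front is known in closed form (f2 = 1 - sqrt(f1) for f1 in [0, 1]), so we could at least quantify how far off a front like this one is instead of eyeballing it. A sketch, assuming the eight values above are (f1, f2) pairs, using generational distance (mean distance from each obtained point to the nearest true-front point); the helper names are illustrative:

```python
import math

def zdt1_front(n=1000):
    """Sample the known zdt1 Pareto front: f2 = 1 - sqrt(f1), f1 in [0, 1]."""
    return [(i / (n - 1), 1 - math.sqrt(i / (n - 1))) for i in range(n)]

def generational_distance(front, true_front):
    """Mean distance from each obtained point to its nearest true-front point."""
    return sum(min(math.dist(p, t) for t in true_front)
               for p in front) / len(front)

# The four points reported above, read as (f1, f2) pairs:
obtained = [(0.577260542544, 1.93248024777),
            (0.00604645791752, 3.47489821472),
            (0.0524067829959, 2.97549525627),
            (0.216016423846, 2.5894670637)]
```

Since every f2 here is well above 1 and the true front lives entirely in [0, 1] x [0, 1], the distance comes out large, which matches the "still don't hit the frontier" observation.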