microprediction / timemachines

Predict time-series with one line of code.
https://www.microprediction.com/blog/popular-timeseries-packages
MIT License
398 stars · 51 forks

couple minor things I noticed #5

Closed: druce closed this issue 3 years ago

druce commented 3 years ago

I've been using hyperopt and optuna for a while, and I'm very curious to see whether any of the other optimizers do better, although honestly my use cases may be pretty simple and I suspect there isn't that much difference.

I would add a requirements.txt or conda environment.yml to make it easier to set up; see below.

In timemachines/optimizers/alloptimizers.py I see this line: https://github.com/microprediction/timemachines/blob/main/timemachines/optimizers/alloptimizers.py#L40

print(optimizer.__name__,(optimizer.__name__,optimizer(objective, n_trials=50, n_dim=5, with_count=True)))

I think this should use the loop parameters per below?

print(optimizer.__name__,(optimizer.__name__,optimizer(objective, n_trials=n_trials, n_dim=n_dim, with_count=True)))

alloptimizers.py seems to run single-threaded; I would consider pool.map to run many optimizations concurrently (though I'm not sure of the best way to do that). If that makes sense, let me know and I can take a crack at a pull request; a sketch of what I mean follows. https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.map
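Roughly what I have in mind, as a sketch only: OPTIMIZERS, the constants, and the toy objective below are stand-ins for whatever alloptimizers.py actually defines, so the names would need adjusting.

from multiprocessing import Pool

import numpy as np

N_TRIALS, N_DIM = 50, 5  # stand-ins for the loop parameters

def objective(u):
    # toy objective on the unit cube, for illustration only
    return float(np.linalg.norm(np.asarray(u) - 0.25))

def run_one(optimizer):
    # run one optimizer end-to-end in its own worker process
    return optimizer.__name__, optimizer(objective, n_trials=N_TRIALS, n_dim=N_DIM, with_count=True)

if __name__ == '__main__':
    from timemachines.optimizers.alloptimizers import OPTIMIZERS  # assumed name for the optimizer list
    with Pool() as pool:
        for name, result in pool.map(run_one, OPTIMIZERS):
            print(name, result)

Processes rather than threads sidestep the GIL, since most of these optimizers are CPU-bound; everything handed to pool.map has to be picklable, which module-level functions are.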

This is the requirements.txt I have; I can do a pull request, let me know. It might need testing/adjusting:

# may want to install fbprophet via conda, needs compiler, pystan
ax-platform
deap
divinity
fbprophet
funcy
hyperopt
microconventions
momentum
nevergrad
numpy==1.19.5
optuna
platypus-opt
poap
pydlm
pymoo
pystan
pySOT
swarmlib

microprediction commented 3 years ago

Good points. I fixed the minor one (loop vars).

The other needs a cleanup, I agree. Right now I'm deliberately restricting to one thread to try to form an opinion about the relative performance of the algorithms, but that should be clearer in the docs, and perhaps there should be a flag for it.

Another fix would be to implement thread-safe counting of the function evals.
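Something like this would do it, as a sketch (not tied to the current code):

import threading

def counted(objective):
    # wrap an objective so the eval count stays correct under concurrent callers
    lock = threading.Lock()
    state = {'n': 0}

    def wrapped(u):
        with lock:
            state['n'] += 1
        return objective(u)

    return wrapped, state

# usage: f, state = counted(my_objective); run optimizers against f; read state['n']

A lock only covers threads, of course; with one optimization per process, each worker counts its own evals anyway (or a multiprocessing.Value could be shared).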

microprediction commented 3 years ago

Sorry, I misread. Yes, running one optimization in each thread would make a lot of sense.

As for requirements, yes please go ahead. I don't use conda so you're better placed.

microprediction commented 3 years ago

You will be pleased (I think) to learn that I have separated out all the optim stuff into https://github.com/microprediction/humpday. Check it out!

druce commented 3 years ago

very cool!

You could add scipy's L-BFGS-B if you were so inclined; it's a quasi-Newton method that worked better for me than SLSQP, which I think is similar. I can do a pull request, or the below worked for me:

-MINIMIZER_KWARGS = {'slsqp': {'method': 'SLSQP'},
+MINIMIZER_KWARGS = {'lbfgsb': {'method': 'L-BFGS-B'},

-SCIPY_OPTIMIZERS = [ scipy_slsqp_cube, scipy_powell_cube, scipy_nelder_cube, scipy_dogleg_cube ]
+def scipy_lbfgsb_cube(objective, n_trials, n_dim, with_count=False):
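Fleshed out, something like this; a sketch that assumes the cube convention (objective on [0,1]^n_dim) and a (best value, best point, eval count) return when with_count=True, which may not match the actual conventions:

import numpy as np
from scipy.optimize import minimize

def scipy_lbfgsb_cube(objective, n_trials, n_dim, with_count=False):
    feval_count = 0

    def counted(u):
        nonlocal feval_count
        feval_count += 1
        return objective(u)

    bounds = [(0.0, 1.0)] * n_dim  # search on the unit cube
    u0 = np.full(n_dim, 0.5)       # start at the centre
    res = minimize(counted, u0, method='L-BFGS-B', bounds=bounds,
                   options={'maxfun': n_trials})
    return (res.fun, res.x, feval_count) if with_count else (res.fun, res.x)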



microprediction commented 3 years ago

Great idea

microprediction commented 3 years ago

Druce, would you mind doing the PR to humpday, not timemachines? Peter