sentenai / reinforce

Reinforcement learning in Haskell
https://sentenai.github.io/reinforce/
BSD 3-Clause "New" or "Revised" License

break the dependency on the gym-http-api server -- call the gym directly #6

Open stites opened 7 years ago

stites commented 7 years ago

One option might be to call the gym directly with call-python-via-msgpack: https://github.com/nh2/call-python-via-msgpack

This would improve performance considerably and would make me feel more comfortable about uploading to Hackage.

stites commented 6 years ago

Off the top of my head, this can be condensed down to a simple Main.hs example file (quasiquotes or strings would both be fine):


main :: IO ()
main = do
  startpython                          -- maybe we need some optional kick-off process
  callpython    [| import gym |]       -- can make effectful imports
  callpython    [| counter = [0,1] |]  -- can initialize statically
  callpython    [| counter |]          -- callpython can be stupid
  c <- returnpy [| counter |]          -- some kind of way to "get stuff out"
  print c                              -- prints '[0,1]', stringly-typed is fine
  let a = show 1                       -- can take a haskell variable...
  callpython    [| counter[1] += a |]  -- ...and do something with it
  c <- returnpy [| counter |]          -- and changes can still be extracted
  print c                              -- prints '[0,2]', our final output

I'm showing quasiquotes because dealing with Template Haskell won't be so bad once we section this off into a reinforce-environments-gym package. Also, I've heard there is a library that can do Python interop with quasiquotes. Just using strings (or something smarter) and dodging Template Haskell compile issues would be nice, too :P
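
For a concrete feel of the string-based route, here is a minimal sketch that uses only readProcess from the process package. callPython is a made-up helper, and each call spawns a fresh interpreter (so no state is shared between calls) -- a real implementation would keep a long-lived subprocess and frame messages with msgpack:

import System.Process (readProcess)

-- Hypothetical helper: run a Python snippet in a fresh interpreter and
-- capture whatever it prints to stdout.
callPython :: String -> IO String
callPython script = readProcess "python3" ["-c", script] ""

main :: IO ()
main = do
  out <- callPython "import gym; print(gym.make('CartPole-v0').action_space)"
  putStrLn out  -- stringly-typed output, as in the sketch above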

stites commented 6 years ago

Small update on this -- it turns out that the gym itself isn't compatible with Cython, so I'm guessing we'll be stuck with this dependency for longer than expected. I've split the gym code out into reinforce-environments-gym in the meantime. I opened up #20, which has to be done anyhow, and I think it might be more prudent to wait on this submodule.

KiaraGrouwstra commented 6 years ago

FWIW, I looked into openai/retro for the contest -- I managed to make it go over the gym-http-api wire, but Python's JSON serialization of the emulator observations obliterated performance. Looking into how openai/retro is implemented, getting it into Haskell seems like a matter of porting over one Python file or so.

stites commented 6 years ago

Awesome! Yeah, every now and then I start porting an environment over from the toy problems and classic envs. Ideally, reinforce would natively implement its own emulators and drop the gym-http-api dependency entirely (gym-http-api seems like a really flawed way to manage language bindings).
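
To make "natively implement" concrete, a toy environment ported into pure Haskell can be tiny. A sketch only -- the types and names below are illustrative, not reinforce's actual API:

data Action = MoveLeft | MoveRight

data Corridor = Corridor
  { position :: Int  -- agent's current cell
  , goal     :: Int  -- terminal cell
  }

-- | Pure transition function: next state, reward, and a done flag.
step :: Corridor -> Action -> (Corridor, Double, Bool)
step env a =
  let pos' = case a of
        MoveLeft  -> max 0 (position env - 1)
        MoveRight -> position env + 1
      done = pos' == goal env
  in (env { position = pos' }, if done then 1 else 0, done)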

KiaraGrouwstra commented 6 years ago

Yeah, that'd be great. What alternative would you have suggested over an HTTP API? Because porting directly would mean effort for every env/language combination. :/

stites commented 6 years ago

Porting directly used to be the plan of action. I was also thinking that Haskell could just bypass the gym and call https://github.com/mgbellemare/Arcade-Learning-Environment directly.

stites commented 6 years ago

Basically any C++ gym alternative is fair game, so long as it exposes a C interface.
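
For illustration, consuming such a library from Haskell would look roughly like this. A sketch under assumptions: ale_new and ale_act are hypothetical names standing in for whatever the extern "C" shim actually exports:

{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.Ptr (Ptr)
import Foreign.C.Types (CInt (..))

-- Opaque handle to the C++ object living behind the C shim.
data ALEInterface

foreign import ccall unsafe "ale_new"
  aleNew :: IO (Ptr ALEInterface)

foreign import ccall unsafe "ale_act"
  aleAct :: Ptr ALEInterface -> CInt -> IO CInt  -- take an action, get a reward

main :: IO ()
main = do
  ale <- aleNew
  r   <- aleAct ale 0  -- action index 0
  print r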

KiaraGrouwstra commented 6 years ago

Honestly, with Retro I think they did a great job of making the Python layer little more than a thin wrapper over the C++. But yeah, C++ envs consumed over the FFI does sound like an interesting compromise.

That said, it's probably mostly that using languages other than Python is considered unusual in ML right now.

So... either Haskell isn't justified here (which would match your debugging issues), or it's better and we're gonna have to convince more people of that.

But yeah, considering the likes of TF/PyTorch all prioritize Python...