rstudio / tensorflow

TensorFlow for R
https://tensorflow.rstudio.com
Apache License 2.0

Benchmark of speed against the Python API #251

Closed. smilesun closed this issue 6 years ago

smilesun commented 6 years ago

Is there any benchmark study evaluating the speed of the R API compared to the Python API?

jjallaire commented 6 years ago

It should be approximately the same, since the R API is a very thin layer over the Python API. When using Keras, we re-order the input data to be row-major rather than column-major (as that yields better training performance). The overhead of this copy may or may not be significant depending on how CPU-intensive your training is; as training gets more expensive, the copy basically washes out.
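
If you want to gauge the cost of that copy yourself, here is a minimal sketch (assuming reticulate is installed; the use of reticulate::np_array() and the matrix size are purely illustrative):

  library(reticulate)

  # A column-major R matrix, as it would be passed to Keras
  x <- matrix(rnorm(1e4 * 100), nrow = 1e4, ncol = 100)

  # Copy into a row-major (C-ordered) NumPy array -- the reorder described above
  system.time(np_array(x, order = "C"))

  # For comparison: keep the column-major (Fortran) layout, no reorder
  system.time(np_array(x, order = "F"))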

smilesun commented 6 years ago

What happens if I use the R API to do reinforcement learning? In that case there is only small-batch training with one epoch (or a few), and the overhead of the transition from R to Python accumulates over time. Is there a better solution for this case?

jjallaire commented 6 years ago

The overhead for a call to Python should be on the order of 1 or 2ms. Are you making tens of thousands of calls?
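
A crude way to measure that round-trip overhead on your own machine (a sketch, assuming reticulate and a working Python install; the lambda is just a trivial stand-in for a real call):

  library(reticulate)

  py_identity <- py_eval("lambda x: x")   # trivial Python callable

  # Time 1,000 round trips from R into Python and back
  elapsed <- system.time(for (i in 1:1000) py_identity(i))[["elapsed"]]
  elapsed / 1000   # rough seconds per call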

smilesun commented 6 years ago

Usually in reinforcement learning one makes not just tens of thousands of calls but millions. So is there any way to avoid the overhead?

eddelbuettel commented 6 years ago

@smilesun What @jjallaire is trying to say is that you may have one call from R down to Python and C++, while your bazillion RL calls happen below that, independent of how you called into it. reticulate offers a very thin shim on top of the existing frameworks. In general, these do not call back to R, so you may be concerned about a non-problem.

eddelbuettel commented 6 years ago

In a mock diagram:


shell layer --> calling R layer --> calling Python layer --> calling C++ code
(no cost)        (no cost)                                   (millions of calls here)

jjallaire commented 6 years ago

If you are going to make millions of calls from R to Python, then that ~1 ms overhead is obviously going to add up. There is no magical way for us to make this go away!
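
Back-of-the-envelope, assuming ~1.5 ms per crossing (the midpoint of the figure quoted above):

  calls    <- c(1e4, 1e5, 1e6)
  overhead <- 1.5e-3   # assumed seconds per R -> Python call
  data.frame(calls, minutes = calls * overhead / 60)
  # one million calls is roughly 25 minutes of pure call overhead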

eddelbuettel commented 6 years ago

Yep. So you want (repeated) function calls pushed as far down the stack as you can -- just like loops.
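
For instance, a sketch of batching instead of looping (here `model` and `states` are hypothetical placeholders for a compiled Keras model and a matrix of inputs):

  library(keras)

  # model and states are placeholders: a compiled Keras model and an input matrix
  # Slow: one R -> Python round trip (plus graph launch) per row
  q_slow <- sapply(seq_len(nrow(states)), function(i) {
    predict(model, states[i, , drop = FALSE])
  })

  # Fast: a single call that scores the whole batch inside the TF runtime
  q_fast <- predict(model, states)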

jjallaire commented 6 years ago

You might want to post some sample code here. One thing I am suspicious of is the need to make millions of calls in the first place, as the design of TF is to create an execution graph that runs entirely within the TF C++ runtime (not even Python code executes in the graph). Python is generally just an authoring tool for these graphs, and R is the same; thus the performance difference is negligible, since TF programs typically don't execute much (if any) Python code either, it's just used for authoring.

Again, I might be missing something by not fully understanding your use case. Sample code would help us give you a better answer.
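
To make the authoring-vs-execution point concrete, here is a minimal graph-mode sketch with the tensorflow package (TF 1.x API, as it existed at the time; the toy computation is arbitrary):

  library(tensorflow)

  # Authoring: R just builds the graph; nothing is computed yet
  x <- tf$placeholder(tf$float32, shape(NULL, 3L))
  y <- tf$reduce_sum(x * 2)

  sess <- tf$Session()

  # Execution: each sess$run() is one R -> Python round trip, but the
  # computation itself runs entirely in the TF C++ runtime
  for (i in 1:5) {
    print(sess$run(y, feed_dict = dict(x = matrix(runif(6), ncol = 3))))
  }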

smilesun commented 6 years ago

I implemented a reinforcement learning package, and this algorithm is slow; I have not figured out why, since everything else is fast. https://github.com/smilesun/rlR/blob/master/R/agent_pg_ddpg.R

The example to run is:

  library(rlR)
  env = makeGymEnv("Pendulum-v0")
  agent = makeAgent("AgentDDPG", env, getDefaultConf("AgentDQN"))
  agent$learn(1)

smilesun commented 6 years ago

The paper is: Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., … Wierstra, D. (2016). Continuous control with deep reinforcement learning. In ICLR. arXiv:1509.02971

smilesun commented 6 years ago

So the question is: every time one calls sess$run(), is there no overhead anymore?