openai / baselines

OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
MIT License

Improving code quality #400

Open david-woelfle opened 6 years ago

david-woelfle commented 6 years ago

Hi everybody,

first, many thanks for putting this repo online. When I found it I thought something like: "High-quality implementations of RL algorithms? That is pretty cool." However, after taking a more detailed look into the repository I was actually quite disappointed: what I found does not appear to be anything close to high-quality code.

Here are some examples of what I mean:

Having a look at baselines/baselines/a2c/a2c.py, there are nearly no comments explaining what is done and why. I get that large parts of the code can be understood by reading the corresponding paper, and that is fair enough for me. Nevertheless, not writing any docstrings at all seems very poor to me. See e.g. lines 18-31 of a2c.py:

def __init__(self, policy, ob_space, ac_space, nenvs, nsteps,
             ent_coef=0.01, vf_coef=0.5, max_grad_norm=0.5, lr=7e-4,
             alpha=0.99, epsilon=1e-5, total_timesteps=int(80e6), lrschedule='linear'):

    sess = tf_util.make_session()
    nbatch = nenvs*nsteps

    A = tf.placeholder(tf.int32, [nbatch])
    ADV = tf.placeholder(tf.float32, [nbatch])
    R = tf.placeholder(tf.float32, [nbatch])
    LR = tf.placeholder(tf.float32, [])

    step_model = policy(sess, ob_space, ac_space, nenvs, 1, reuse=False)
    train_model = policy(sess, ob_space, ac_space, nenvs*nsteps, nsteps, reuse=True)

While reading this code I ask myself many questions, e.g.:
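For instance, just annotating the placeholders above with the questions they raise (my guesses only, which is rather the point):

A = tf.placeholder(tf.int32, [nbatch])      # the taken actions? why int32, and why flattened to nbatch?
ADV = tf.placeholder(tf.float32, [nbatch])  # advantages? computed where, and how?
R = tf.placeholder(tf.float32, [nbatch])    # returns? discounted over nsteps, bootstrapped or not?
LR = tf.placeholder(tf.float32, [])         # the learning rate? why a placeholder rather than a constant?

step_model = policy(sess, ob_space, ac_space, nenvs, 1, reuse=False)              # what does the 1 mean here?
train_model = policy(sess, ob_space, ac_space, nenvs*nsteps, nsteps, reuse=True)  # why nenvs*nsteps, and why reuse?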

Going on, here are some more points I would expect from high-quality code:

That's a lot of criticism, I know. So why should you care at all? Well, the idea of this repo is actually brilliant, and I am sure it works fine as long as I don't have to touch anything. But as soon as I want to change something, I would rather throw your code away and rewrite it myself, because that is less effort than understanding poorly documented code.

Finally, to leave you with a question: is there any chance that the above points will be improved in the future?

Kind regards,

David

pzhokhov commented 6 years ago

Hi David! Thank you for pointing that out! All your concerns are perfectly valid; in fact, we have tripped over many of these problems internally as well. Recently we started an effort to improve the code quality of baselines, specifically test and benchmark coverage and pep8 compliance (albeit with somewhat relaxed standards, at least at first, around line length, star imports and maybe variable names, because researchers love to name variables "Q" or "Vf" :) ). Code comments and docs will follow, with a bit of a fuzzy timeline though. If there is any particular algorithm you'd like to receive high-priority love, please let us know; we are also open to pull requests with either new algorithms or improvements to the existing ones.

david-woelfle commented 6 years ago

Hi @pzhokhov! Thank you for your quick reply, and good to know you are working on this. I think the concept of this repository is so great that it's worth spending the effort on it. I will see if I can spend some time during my next projects to add some documentation too. Have you chosen a docstring convention (i.e. layout) yet?

One more thing: is there a common API layout? Like e.g. in scikit-learn, where every classifier/regressor provides a fit and a predict method.
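Something along these lines would already help a lot (the class and method names here are made up, just to illustrate the kind of layout I mean):

import gym

env = gym.make('CartPole-v0')
model = A2C(env=env, policy='mlp', lr=7e-4)   # hypothetical class; constructor takes hyperparameters
model.learn(total_timesteps=int(1e6))         # analogous to sklearn's fit()
obs = env.reset()
action = model.predict(obs)                   # analogous to sklearn's predict()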

jacky-liang commented 6 years ago

Adding on to this - the Gym library provides a nice and clean interface for interacting with RL environments. It'd be nice if the same consistency could be applied to the algorithms and policies in baselines.
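For reference, interacting with any Gym environment looks the same:

import gym

env = gym.make('CartPole-v0')
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()           # any policy can be plugged in here
    obs, reward, done, info = env.step(action)   # uniform step() contract across environments
env.close()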

For example, it seems like many algorithms implement their own versions of neural nets and policies, when a more modular approach could be taken: the implementation details of policies could be abstracted away from the training algorithms, and vice versa.

Ideally there would just be one train.py that takes a config file specifying the policy architecture and the algorithm, and loads the corresponding components.
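A rough sketch of what I mean (the config keys and the build_env/build_policy helpers are made up here, not existing baselines code):

# train.py -- hypothetical single, config-driven entry point
import importlib
import json
import sys

def main(config_path):
    with open(config_path) as f:
        cfg = json.load(f)  # e.g. {"algo": "a2c", "env": "PongNoFrameskip-v4",
                            #       "policy": "cnn", "total_timesteps": 10000000}

    algo = importlib.import_module('baselines.' + cfg['algo'])
    env = build_env(cfg['env'])           # assumed helper: gym.make + wrappers/vectorization
    policy = build_policy(cfg['policy'])  # assumed helper: maps a name to a network builder
    model = algo.learn(policy=policy, env=env,
                       total_timesteps=int(cfg['total_timesteps']))
    model.save(cfg.get('save_path', 'model.pkl'))

if __name__ == '__main__':
    main(sys.argv[1])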

It'd also be nice to create a standard policy saving/loading interface, so it would be possible to load policies and collect data from them separately from the learning algorithms that were used for training.
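Even a minimal round trip like the following would do; load_policy() does not exist in baselines today, it just names the kind of interface I mean:

model = a2c.learn(policy=policy, env=env, total_timesteps=int(1e6))
model.save('policy.pkl')               # weights plus enough metadata to rebuild the network

policy_fn = load_policy('policy.pkl')  # usable without importing the a2c training code
obs = env.reset()
action = policy_fn.step(obs)           # collect data / evaluate, no training machinery needed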

pzhokhov commented 6 years ago

@david-woelfle API layout - there is no solid common API just yet (that's another part of the code quality effort). So far the most stable part seems to be that each algorithm provides a "learn" method that takes an environment, a policy, and algorithm hyperparameters, and returns the trained model. Docstrings - I personally like multi-line ones delimited by triple single quotes, with a description of parameters and return values for "public" methods, but again, no solid convention across the repo exists at the moment. @jacky-liang both are good points. A bare-bones version of the saving/loading interface is in internal review now, so it should be out soon; the modular API for training, algorithms, policies and environments is something we are also currently working on.
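Roughly something like this (the parameter names are just illustrative, not a fixed API):

def learn(policy, env, nsteps=5, total_timesteps=int(80e6), lr=7e-4):
    '''
    Train a model on the given (vectorized) environment.

    policy          : function or class that builds the step and train networks
    env             : environment to train on
    nsteps          : number of environment steps per update
    total_timesteps : total number of environment steps to train for
    lr              : learning rate

    Returns the trained model (exposing e.g. step() and value()).
    '''
    ...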

mt- commented 6 years ago

Hi all,

Not to pile on here, but I have a few more points I think are worth highlighting.

While I massively appreciate the code, cleaning things up a bit would be awesome.

jperl commented 6 years ago

The stable-baselines fork has done a lot of heavy lifting to refactor these baselines into a common API and add documentation. Would it be possible to merge their efforts upstream?

Edit: this discussion is happening here :pray:.