google-deepmind / bsuite

bsuite is a collection of carefully-designed experiments that investigate core capabilities of a reinforcement learning (RL) agent

DQN mnist & mountain car performance #20

Open pluebcke opened 4 years ago

pluebcke commented 4 years ago

Hi,

While working on a PyTorch DQN agent for BSuite experiments, I noticed quite poor results on the mnist and mountain_car experiments. I see that a similar question was addressed here, but the thread was closed.

To investigate further, I created a new conda environment, downloaded and installed a fresh copy of BSuite, and ran the DQN agent from the baselines. The only settings I changed were "bsuite_id" (to "SWEEP") and the save path.
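For reference, the sweep amounts to roughly the following loop (a minimal sketch, not the literal run.py; SAVE_PATH is just my save location, and run.py actually builds the agent from its command-line flag values rather than dqn.default_agent):

```python
# Minimal sketch of what a SWEEP run does.
import bsuite
from bsuite import sweep
from bsuite.baselines import experiment
from bsuite.baselines.tf import dqn

SAVE_PATH = '/tmp/bsuite'  # my changed save path

for bsuite_id in sweep.SWEEP:  # every environment configuration in bsuite
  env = bsuite.load_and_record(bsuite_id, save_path=SAVE_PATH, overwrite=True)
  agent = dqn.default_agent(env.observation_spec(), env.action_spec())
  experiment.run(agent, env, num_episodes=env.bsuite_num_episodes)
```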

Comparing the results from both agents with the bar plot on page 16 of the BSuite manuscript, both agents perform worse on mnist and mountain_car and better on catch:

[image: bar plot of bsuite scores for the two agents]

Were there any changes on the environments that I missed? The DQN agent from the manuscript did use the default parameters from the baseline directory, correct?

Thanks, Peter

iosband commented 4 years ago

Hi Peter!

Thanks for raising this... I think we might have seen some slippage in agent performance.

I'm not sure exactly which change this has come from, but my suspicion is that some small details in the TF1->TF2 migration changed some scores (the agents aren't exactly the same). We will look into this and then re-run the baselines with updated numbers.

Many thanks, Ian

iosband commented 4 years ago

Hello again!

I have just run the agents checked in at HEAD, and I do see the scoring you observed:

[image: bar plot of baseline agent scores from the HEAD run]

We may need to add some more continuous testing, but the scores on mnist in particular seem "off" for the DQN implementation.

Can you confirm this is still an issue for you? Is the poor performance coming from your implementation of DQN, or from the baseline implementation we provide?

mklissa commented 4 years ago

I have a similar observation concerning mountain_car, although in my case it relates to the actor-critic algorithm. There seems to be a major difference between the results reported in the paper (close to 1) and the ones in this thread (close to 0). I have also tried running actor_critic_rnn on mountain_car, and it does not seem to learn with the default hyperparameters.
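For what it's worth, this is roughly what I ran (a sketch assuming the baseline's default_agent and the standard experiment loop):

```python
# Sketch: actor_critic_rnn baseline on a single mountain_car configuration.
import bsuite
from bsuite.baselines import experiment
from bsuite.baselines.tf import actor_critic_rnn

env = bsuite.load_and_record('mountain_car/0', save_path='/tmp/bsuite')
agent = actor_critic_rnn.default_agent(env.observation_spec(), env.action_spec())
experiment.run(agent, env, num_episodes=env.bsuite_num_episodes)
```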

iosband commented 4 years ago

Yes @mklissa - I see that difference above.

There have been several moving pieces here. However, I think the best approach is to go from what is at HEAD and open a new issue to update the paper/reference colabs to incorporate this bug fix.

pluebcke commented 4 years ago

Dear Ian,

Thanks for looking into this! Back in March, I observed poor performance on mnist with both the baseline implementation and my own implementation of DQN.

Given that mnist seems to work perfectly fine for you, I assume there must be some problem on my side. I will set up a system from scratch and run the baseline implementation of DQN again, though it might take a while until I find time to do that.

Best regards,

Peter

pluebcke commented 3 years ago

Hi, I finally found some time to look into this issue. On my laptop, the performance of the baseline TensorFlow DQN agent is still quite poor (a score of around 0.25, as in the bar plots above).

I used a fresh install of Pop!_OS 20.04 (a distribution based on Ubuntu) and then performed as few steps as possible to run the agent:

  1. Installed Anaconda
  2. Created a Conda environment with Python 3.7
  3. Steps from the bsuite github page:
    • pip install --upgrade pip setuptools
    • pip install bsuite
    • pip install bsuite[baselines]
  4. Opened bsuite/bsuite/baselines/tf/dqn/run.py, changed the bsuite_id to 'SWEEP' and set the 'verbose' flag to False
  5. python run.py
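(For what it's worth, steps 4 and 5 can be combined without editing the file, since these are absl flags: python run.py --bsuite_id=SWEEP --verbose=false.)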

Hope this helps.

Best regards, Peter

iosband commented 3 years ago

Ah... OK, I think that in order to get the claimed performance, you need to run dqn.default_agent().

I can see that this is a bit confusing, but we wanted to expose the flags as an easy way for people to tinker! If you instead go to baselines/tf/run.py, then you should be able to get the same behaviour... is that right?
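Concretely, the swap looks something like this (a self-contained sketch, using a single mnist configuration and a placeholder save path):

```python
# Sketch of the change in run.py: build the agent with the defaults used
# for the reported results, instead of from the command-line flags.
import bsuite
from bsuite.baselines.tf import dqn

env = bsuite.load_and_record('mnist/0', save_path='/tmp/bsuite')
agent = dqn.default_agent(env.observation_spec(), env.action_spec())
```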

BTW... do you think we should instead remove the flag options and avoid this kind of confusion?

pluebcke commented 3 years ago

I would keep the flag options, but maybe give them the same defaults as the default agent.
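Something like the following, where the flag defaults mirror whatever default_agent() uses (flag names and values here are purely illustrative, not bsuite's actual defaults):

```python
# Illustrative only: keep the flags for tinkering, but give them the same
# defaults as dqn.default_agent(), so a plain `python run.py` reproduces
# the reported scores. These names/values are placeholders.
from absl import flags

flags.DEFINE_integer('batch_size', 32, 'Mini-batch size for SGD.')
flags.DEFINE_float('learning_rate', 1e-3, 'Optimizer learning rate.')
flags.DEFINE_integer('target_update_period', 4, 'Steps between target-network updates.')
flags.DEFINE_float('epsilon', 0.05, 'Epsilon-greedy exploration parameter.')
```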

As you suggested, I replaced the agent (the one built from the flags) with dqn.default_agent() in run.py and ran the experiments again. Unfortunately, there was no improvement on the mnist experiments.

@mklissa you said you observed something similar on MountainCar. Did the mnist experiments work for you? If I'm the only one experiencing this problem, then there might just be some issue on my side.

Best regards, Peter

jbarsce commented 3 years ago

Hey there, I'm writing to report that I'm experiencing the same problem as @pluebcke on mnist. I couldn't replicate the good mnist results reported in the paper. I also saw poor performance (a score of 0.2-0.26 at most) using PPO and DQN agents from an external library (stable-baselines); I tried different hyperparameters, numbers of layers/neurons, and activation functions, with no effect. I also checked the mnist environment implementation offered here, and it seemed OK to me.

Today I created a new virtual env with the latest BSuite version and the baselines, ran the 20 seeds twice, and the baseline DQN agent also scored 0.23. This also happens with the noise and scale variants.
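In case it helps, I computed the scores with the analysis helpers from the bsuite colab, roughly like this (a sketch; the results path is mine, and I'm assuming the csv_load/summary_analysis API here):

```python
# Sketch: load CSV results and compute bsuite scores, as in the analysis colab.
from bsuite.experiments import summary_analysis
from bsuite.logging import csv_load

experiments = {'dqn': '/tmp/bsuite/dqn'}  # agent name -> results directory
df, sweep_vars = csv_load.load_bsuite(experiments)

scores = summary_analysis.bsuite_score(df, sweep_vars)
print(scores[scores.bsuite_env == 'mnist'])  # mnist score per agent
```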

iosband commented 3 years ago

Hi @jbarsce - I'm not sure I understand the question.

So, are you saying that (a) the TF DQN checked into bsuite.baselines is not solving the mnist bandit task for you, or (b) another agent is unable to solve the mnist task?

We have some tools for testing this internally within Google/DeepMind... and based on that I'm confident that the bsuite/baselines/jax/dqn and bsuite/baselines/tf/dqn do reproduce the performance.

However... we clearly need to work out a way to share these tests/reproducibility/installation instructions so that this confusion does not arise.

iosband commented 3 years ago

[image: record of nightly runs for the TF baselines]

For reference, here is a record of the nightly runs for the TF baselines. You can see that some of the experiments are a little noisy... but the TF DQN is consistently reproducing the mnist results above.

jbarsce commented 3 years ago

Hi Ian, thanks for the quick reply! Yes, I ran the BSuite experiments with another DQN agent and noticed that, while the other environments performed similarly to the accompanying paper, mnist was the only one that underperformed.

As this external agent had several variations, I tried to replicate the results with the DQN agent from this repo, trying both tf and jax, isolating each in a new virtual environment. In case they are of any help, these are the steps I followed (taken from the jax repo and from here):

  1. Created a conda environment with python==3.6
  2. pip install --upgrade pip setuptools
  3. pip install bsuite[baselines]

For TensorFlow 2.1, I ran the experiments with:

  1. python bsuite/bsuite/baselines/tf/dqn/run.py --bsuite_id=MNIST

For JAX:

  1. pip install git+https://github.com/deepmind/dm-haiku
  2. pip install --upgrade jax jaxlib
  3. pip install git+git://github.com/deepmind/optax.git
  4. pip install git+git://github.com/deepmind/rlax.git
  5. python bsuite/bsuite/baselines/jax/dqn/run.py --bsuite_id=MNIST
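A note on --bsuite_id=MNIST: if I read run.py correctly, any value naming an attribute of bsuite.sweep is treated as a whole sub-sweep, so this runs every mnist/* configuration. Roughly (paraphrased, not the verbatim code):

```python
# How run.py appears to resolve the bsuite_id flag (paraphrase).
from bsuite import sweep

def resolve_bsuite_ids(bsuite_id: str):
  """Returns the concrete ids to run for a given --bsuite_id value."""
  if bsuite_id in sweep.SWEEP:       # a single id, e.g. 'mnist/0'
    return [bsuite_id]
  if hasattr(sweep, bsuite_id):      # a sub-sweep, e.g. 'MNIST' or 'SWEEP'
    return list(getattr(sweep, bsuite_id))
  raise ValueError(f'Unknown bsuite_id: {bsuite_id}')

print(resolve_bsuite_ids('MNIST'))   # ['mnist/0', 'mnist/1', ...]
```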

Environment: Ubuntu 18.04 bionic

Please let me know if you need any other information. Finally, thanks for this great repository!

Juan

pluebcke commented 3 years ago

Just a wild guess: maybe something went wrong with the download of the mnist input dataset for Juan and me?
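One quick way to check would be to load the environment directly and eyeball the observations (a sketch; this only assumes bsuite.load_from_id and the standard dm_env TimeStep interface):

```python
# Sanity check: does the mnist environment return sensible image observations?
import bsuite
import numpy as np

env = bsuite.load_from_id('mnist/0')
timestep = env.reset()
obs = np.asarray(timestep.observation)

# If the dataset downloaded correctly, this should look like a digit image:
# a non-degenerate shape and pixel values that are not all zeros.
print('shape:', obs.shape)
print('min/max/mean:', obs.min(), obs.max(), obs.mean())
```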

iosband commented 3 years ago

Yes interesting... something is getting lost between the version that is checked in to Google3 and the settings you are running.

@yotam @aslanides and I will have a look into this...

Going to keep this open for now and try to reproduce this...

braham-snyder commented 2 years ago

Repro in ~10 lines (excluding imports): https://colab.research.google.com/drive/1XtTv-p2bXfvMBT_77cWjWRHPXIvimWlO?usp=sharing

iliasdf commented 3 months ago

Hi,

I recently started working with bsuite, which I find a lovely environment to work with. I noticed as well that the results of the mnist experiment are off. Apparently you discussed this issue a while back already; did you find a solution in the meantime?

Thanks, Ilias