Closed nadimahmedwales closed 3 years ago
Yes, I can reproduce the error just fine. Google Colab recently upgraded its version of TensorFlow. I will take a look at what needs to be changed soon.
Also, it looks like Google changed/broke something in the base Colab image that now causes this error as well:
EasyProcessError: start error <EasyProcess cmd_param=['Xvfb', '-help'] cmd=['Xvfb', '-help'] oserror=[Errno 2] No such file or directory: 'Xvfb': 'Xvfb' return_code=None stdout="None" stderr="None" timeout_happened=False>
I am looking into both.
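For what it is worth, that EasyProcessError usually just means the Xvfb binary is not installed on the Colab image, so pyvirtualdisplay cannot find it on the PATH. A typical workaround (an assumption on my part, not a confirmed fix for this specific notebook) is to install it before starting the virtual display; in a Colab cell, prefix each line with `!`:

```shell
# Install the X virtual framebuffer (and x11-utils, which some display
# wrappers also expect) so pyvirtualdisplay/EasyProcess can launch Xvfb.
apt-get update -qq
apt-get install -y -qq xvfb x11-utils
```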
So, I know what is going on. Some changes to TF-Agents now cause it to throw an error because it detects that the neural network was created with floating-point inputs, yet the Atari environment returns ints (0-255). I can get past the above error with this code:
import numpy as np
from tf_agents.specs import tensor_spec

# Rebuild the environment's observation spec with a float dtype so it
# matches the Q-network's floating-point inputs.
observation_spec = tensor_spec.BoundedTensorSpec(
    shape=observation_spec.shape,
    dtype=np.float32,
    name=observation_spec.name,
    minimum=observation_spec.minimum,
    maximum=observation_spec.maximum)
But the spec is cached in several locations, so this just causes a different cast error further down. I wish TF-Agents actually included an Atari example. I need to put a bit more thought into how to handle this breaking change.
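Until this is resolved upstream, one way to sidestep the int/float mismatch without patching the spec everywhere it is cached is to cast the frames at the environment boundary, before they ever reach the network. A minimal sketch (the function name and the 0-1 scaling are my own choices, not TF-Agents API):

```python
import numpy as np

def to_float_obs(frame):
    """Cast a uint8 Atari frame (values 0-255) to float32 in [0, 1].

    The Q-network was built with floating-point inputs, so converting
    observations before they reach the network avoids the dtype error.
    """
    return frame.astype(np.float32) / 255.0

# Example: a stacked 84x84x4 Atari observation
obs = np.full((84, 84, 4), 255, dtype=np.uint8)
scaled = to_float_obs(obs)
```

Dividing by 255 also normalizes pixel values, which is the usual preprocessing for DQN-style networks anyway.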
I did raise an issue in TF-Agents; we will see if they have any guidance. I am sure I can "hack" my way through this, and probably that is what is called for. But I am sure they will only break my "hack" in their next version. I am a bit surprised they do not have a "Hello World" Atari example anymore.
See the discussion at the above bug (agents 487); this is a bug in TF-Agents that hopefully they will resolve soon. I will add a note to my notebook.
TF-Agents added an experimental Atari example:
I will see about incorporating this into my example soon.
The link seems to be dead. Any updates on this issue?
Hi, thanks for this lecture. I was watching your video on YouTube and it was really helpful. I was following the video too, but got stuck on this line of the code:
for filename in tqdm(os.listdir(faces_path)):
I got this error message:
FileNotFoundError Traceback (most recent call last)
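Regarding that FileNotFoundError: `os.listdir` raises exactly this when `faces_path` points to a directory that does not exist, which usually means the earlier download/extract cell did not run, or Google Drive was not mounted. A quick defensive check (the helper name is hypothetical; substitute whatever path the notebook actually defines):

```python
import os

def check_faces_dir(path):
    # Fail with a clearer message than the bare os.listdir traceback.
    if not os.path.isdir(path):
        raise FileNotFoundError(
            f"{path} does not exist; re-run the cell that downloads/extracts "
            "the faces dataset (or mount Google Drive first).")
    return sorted(os.listdir(path))

# usage sketch:
# for filename in tqdm(check_faces_dir(faces_path)):
#     ...
```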
TF-Agents seems to have removed all of their Atari examples, and their code has several issues that prevent it from working with the Gym Atari examples. I will very likely move to a better library for reinforcement learning. I am just not having much luck with TF-Agents.
Okay, I believe I fixed it. I checked in a new version; it works entirely in Colab. It is NOT very efficient with training, so I need to tune it a bit. It also needs some general cleanup. I will leave this issue open while working on that.
I have it working and tuned as best I can. The later versions of TF-Agents do not seem to train as efficiently as before on Atari, which is unfortunate, but I do not believe the Atari classes in TF-Agents are really a priority (or even an interest) of the TF-Agents team.
Hi, thank you for the course. I tried to run the Atari example on Google's Colab; however, there seems to be an issue. I have restarted the session as mentioned in the lecture video, but I still get an error.
The problem arises in the Agent section of the Jupyter Notebook, in the cell that starts by defining the optimiser:
optimizer = tf.compat.v1.train.RMSPropOptimizer(
The issue happens in the last part of the cell; I have copied and pasted the error below:
ValueError Traceback (most recent call last)