Unity-Technologies / ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
https://unity.com/products/machine-learning-agents

Visual Observation and CNN #2307

Closed NonoLG closed 5 years ago

NonoLG commented 5 years ago

Hi all,

I am currently working on a project that uses visual information as the observation for my agent. I would like to understand in more detail how the visual information is processed and interpreted, so I can optimize my settings and get a better understanding of how it works. Does anyone have information, or ideas for scripts I could use to visualize the CNN outputs used in ml-agents, please?

Do you think we could customize the models.py script to show the convolutional layers' outputs during training?

Thanks

ervteng commented 5 years ago

Yes, absolutely. You can definitely add a CNN output layer to the inference_dict in policy.py and save it out to numpy or use opencv to view it live. Tensorflow also offers a general solution to visualizing Tensors using Tensorboard: https://github.com/tensorflow/tensorboard/blob/master/tensorboard/plugins/debugger/README.md
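For illustration, here is a minimal sketch of what you could do with such a fetched CNN output once you have it as a numpy array. The function name and shapes below are my own assumptions, not ml-agents code; it just normalizes a `(batch, height, width, channels)` activation tensor into 8-bit grayscale images that you could then write out with opencv or view live:

```python
import numpy as np

def feature_maps_to_images(activations):
    """Normalize a (batch, H, W, C) float activation tensor to uint8
    grayscale images, one per (sample, channel) pair.

    `activations` is assumed to be what sess.run returned for the
    CNN output tensor added to the inference dict."""
    batch, h, w, c = activations.shape
    # Move channels out front: one (H, W) map per (sample, channel) pair.
    maps = np.transpose(activations, (0, 3, 1, 2)).reshape(batch * c, h, w)
    # Normalize each map independently to [0, 255].
    lo = maps.min(axis=(1, 2), keepdims=True)
    hi = maps.max(axis=(1, 2), keepdims=True)
    scaled = (maps - lo) / np.maximum(hi - lo, 1e-8)
    return (scaled * 255).astype(np.uint8)

# Random data standing in for a real CNN activation:
fake = np.random.randn(2, 8, 8, 16).astype(np.float32)
images = feature_maps_to_images(fake)
print(images.shape)  # (32, 8, 8)
print(images.dtype)  # uint8
```

Each resulting 2-D array can be passed straight to `cv2.imshow` or `cv2.imwrite`.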

NonoLG commented 5 years ago

Thanks for the info. For people like me who don't have much knowledge of TensorFlow and Tensorboard, do you have any example or some code I could use, please?

There are many code files, and I can't find where to modify things.

Junggy commented 5 years ago

@NonoLG It's absolutely possible. (I've modified the Python API twice for my own projects.)

Here is how I got my information out of the Python API side. But be careful: it's fairly involved. I'd recommend backing up your Anaconda ml-agents module before making any modifications.

Here, I'll assume you are training with PPO, without curiosity.

  1. Go to model.py at ./mlagents/trainers/model.py
  2. Choose your preferred output and declare it as a class attribute (i.e. self.out_1 = your_output). Since you are using visual observations, you will probably want to look at the class method definition of create_visual_observation_encoder.
  3. Go to policy.py at ./mlagents/trainers/ppo/policy.py. There, around line 46, you will find inference_dict. Add your class attribute (from step 2, i.e. self.out_1) to the inference dict as self.model.attribute_name (i.e. self.model.out_1):
           self.inference_dict = {'action': self.model.output, 'log_probs': self.model.all_log_probs,
                               'value': self.model.value, 'entropy': self.model.entropy,
                               'learning_rate': self.model.learning_rate,
                               'my_out_1' : self.model.out_1,
                               'my_out_2' : self.model.out_2}
  4. Go to policy.py at ./mlagents/trainers/policy.py and find the class method definition of _execute_model(). You will see the run_out dict variable; read out whatever you want from it (i.e. out_1_np = run_out['my_out_1']). If it is an image, it will have shape (Batch, Height, Width, Channel). You can plot it with matplotlib, save it as an image, or use any format you want. I'd recommend declaring a counter so you only plot/save every few steps, and stop entirely after a while: if you plot, Unity will pause until you close the plot, and steps come so fast that continuously saving images will eat a huge amount of disk storage if you never stop.
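A small sketch of the counter idea from step 4. The class and its names are my own invention, not ml-agents code; something like this could be called from _execute_model() with run_out['my_out_1'], saving the array every N steps and stopping after a fixed number of dumps so Unity keeps running and the disk doesn't fill up:

```python
import numpy as np

class ActivationDumper:
    """Save an activation array every `every` steps, at most `limit` times."""

    def __init__(self, every=100, limit=10, prefix="conv_out"):
        self.every = every    # save only every N-th step
        self.limit = limit    # stop after this many files
        self.prefix = prefix
        self.step = 0
        self.saved = 0

    def maybe_save(self, activation):
        """Call once per step; returns the path if a file was written."""
        self.step += 1
        if self.saved >= self.limit or self.step % self.every != 0:
            return None
        path = "%s_step%06d.npy" % (self.prefix, self.step)
        np.save(path, activation)  # .npy keeps full float precision
        self.saved += 1
        return path

# Usage: every 100th step is dumped, and dumping stops after 10 files.
dumper = ActivationDumper(every=100, limit=10)
```

Saving .npy instead of .png keeps the raw values, so you can inspect or re-plot them later in a notebook with matplotlib.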
NonoLG commented 5 years ago

Thanks a lot for your answer. Just one thing: what do you mean by "your_output", please?

Junggy commented 5 years ago

@NonoLG That means any output you want to see, like the output of a certain CNN layer. For example, here you could pick the output after the first conv layer:

           output = tf.layers.conv2d(input, ...)
           self.out = output
           output = tf.layers.conv2d(output, ...)
           output = tf.layers.conv2d(output, ...)
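In case the pattern is still unclear, here is the same idea stripped of TensorFlow entirely (a toy numpy "encoder", everything here is illustrative and not ml-agents code). The trick is simply to assign the intermediate result to a `self.` attribute before the variable gets overwritten by the next layer, so you can read it back later:

```python
import numpy as np

class ToyEncoder:
    """Toy stand-in for the visual observation encoder: three 'layers',
    with the first layer's output kept as a class attribute."""

    def forward(self, x):
        out = x * 2.0     # stand-in for the first conv layer
        self.out_1 = out  # <-- this is the "your_output" being captured
        out = out + 1.0   # second layer
        out = out - 0.5   # third layer
        return out

enc = ToyEncoder()
result = enc.forward(np.ones((1, 4, 4, 3)))
# enc.out_1 now holds the first layer's activation, just like
# self.out_1 in model.py would after the change described above.
print(enc.out_1.shape)  # (1, 4, 4, 3)
```

In the real model.py the layers are tf.layers.conv2d calls and `self.out_1` is a Tensor you then fetch through the inference dict, but the capture pattern is identical.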

NonoLG commented 5 years ago

Thanks for your help, but my noob level doesn't let me figure out how to do it —'

If somebody has tried it on their side and would be ready to share it, that would be incredible :D

xiaomaogy commented 5 years ago

Thank you for the discussion. We are closing this issue due to inactivity. Feel free to reopen it if you’d like to continue the discussion though.

github-actions[bot] commented 3 years ago

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.