Closed rajatpaliwal closed 5 years ago
Hi @rajatpaliwal, I'm looking into the best way to do this; it looks like we don't have an option for it currently, but you can add
self.summary_writer.add_graph(self.policy.graph, step)
to Trainer.write_summary(), and a visualization of the graph will be available in Tensorboard (see https://www.tensorflow.org/tensorboard/r1/graphs).
I don't think allowing editing of the graph is something that we plan to support.
Hi @chriselion, just confirming: should I add this line in trainer.py, inside write_summary()?
I added the line "self.summary_writer.add_graph(self.policy.graph, step)" in Trainer.write_summary(). But in Tensorboard I see "No graph definition files were found." Also, I'm receiving this message while training:
INFO:mlagents.trainers:Cannot write text summary for Tensorboard. Tensorflow version must be r1.2 or above.
My TensorFlow version is 1.7.1.
Not sure about that warning. I think adding
self.summary_writer.add_graph(self.policy.graph)
in Trainer.save_model() might be a better place instead; that gets called if you stop training early.
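For reference, here's a minimal, self-contained sketch of what add_graph does outside of ml-agents. The tiny graph below is a stand-in for self.policy.graph, and the log directory is an arbitrary choice; this uses the TF 1.x-style API via tf.compat.v1 (on older TF 1.x releases such as 1.7, use `import tensorflow as tf` directly and drop the disable_eager_execution call):

```python
import tensorflow.compat.v1 as tf  # TF 1.x-style API (also works on TF 2.x)

tf.disable_eager_execution()

# Stand-in for self.policy.graph: a tiny one-layer network.
graph = tf.Graph()
with graph.as_default():
    obs = tf.placeholder(tf.float32, [None, 8], name="vector_observation")
    w = tf.Variable(tf.random_normal([8, 32]), name="hidden_0/weights")
    hidden = tf.nn.relu(tf.matmul(obs, w), name="hidden_0/activation")

# Stand-in for self.summary_writer: add_graph serializes the GraphDef
# into the event file so Tensorboard's "Graphs" tab can render it.
writer = tf.summary.FileWriter("./summaries/graph_demo")
writer.add_graph(graph)
writer.close()
```

After running this, `tensorboard --logdir ./summaries` should show the graph under the Graphs tab.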
Thanks, that worked. I'm able to visualize the graph now. Do you think the TensorFlow Graph Transform tool can help in editing this neural net architecture?
I'm not familiar with the TensorFlow Graph Transform tool, and it's not something that we can provide any support for.
Backing up a bit, what are you trying to accomplish by editing the network?
It's just that, in order to achieve better training results, I believe that apart from setting hyperparameters in the config files, configuring the architecture of the neural network itself (choosing the loss function, adding dropout techniques, etc.) can also help.
If you want more control over the network, I'd recommend modifying the code that we use to construct the graph instead of trying to modify the graph itself. For example, if you wanted to add dropout to the convolutional layers, you could start from one of the options here: https://github.com/Unity-Technologies/ml-agents/blob/339781594317d48a87ff75b5ef1c77b979ebc420/ml-agents/mlagents/trainers/models.py#L474-L505
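As an illustrative sketch (not the ml-agents code itself), here's how dropout could be inserted after a convolutional layer using raw TF 1.x ops; the input shape, kernel size, and dropout rate are arbitrary assumptions, and on older TF 1.x releases tf.nn.dropout takes keep_prob=0.8 rather than rate=0.2:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF 1.x-style API (also works on TF 2.x)

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # 84x84 RGB visual observation, a common camera resolution in ml-agents.
    visual_in = tf.placeholder(tf.float32, [None, 84, 84, 3], name="visual_observation")
    kernel = tf.Variable(tf.random_normal([8, 8, 3, 16]), name="conv_1/kernel")
    conv = tf.nn.relu(
        tf.nn.conv2d(visual_in, kernel, strides=[1, 4, 4, 1], padding="VALID")
    )
    # Hypothetical change: zero out 20% of the conv activations during training.
    dropped = tf.nn.dropout(conv, rate=0.2)

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(dropped, feed_dict={visual_in: np.zeros((1, 84, 84, 3), np.float32)})

# Spatial size after an 8x8 kernel with stride 4 on 84 pixels: (84 - 8) / 4 + 1 = 20
print(out.shape)  # (1, 20, 20, 16)
```

In a real change you'd also want to disable dropout at inference time (e.g. by feeding rate 0 through a placeholder), since the exported model shouldn't drop activations.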
Hi @chriselion, thanks for guiding me to this file. This was really helpful. I believe I now have quite a lot of insight into the neural net architecture we are using and how to configure it to my needs.
Hi @rajatpaliwal, I logged a feature request in our internal tracker to automatically save the graph for Tensorboard. Closing this issue for now, but please reopen (or make a new one) if you have more problems.
Hi @chriselion, I have another small query: how can I add a histogram depicting the weights of the hidden layers after training is over? Any suggestion would be helpful.
In theory you should be able to use tf.summary.histogram for the layers you care about. However, I tried this out locally and didn't get any results. I'll ask some other Tensorboard experts here and see if they have any suggestions.
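For anyone who wants to experiment with this: a minimal sketch of writing a weight histogram with TF 1.x summaries. The variable name and log directory are made up for the example; in ml-agents you'd fetch the actual weight tensors from self.policy.graph instead:

```python
import tensorflow.compat.v1 as tf  # TF 1.x-style API (also works on TF 2.x)

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # Stand-in for a hidden-layer weight matrix from the trained policy.
    weights = tf.Variable(tf.random_normal([64, 32]), name="hidden_1/weights")
    hist_op = tf.summary.histogram("hidden_1_weights", weights)

writer = tf.summary.FileWriter("./summaries/histogram_demo")
with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    # Evaluate the histogram summary and write it; Tensorboard's
    # "Histograms" tab needs at least one recorded step to show anything.
    writer.add_summary(sess.run(hist_op), global_step=0)
writer.close()
```

One common reason for seeing no results is writing the summary op but never running it in a session and passing the result to add_summary, so it's worth double-checking that step.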
Thanks @chriselion for the suggestion. I'll be waiting for any further suggestions from your side.
I asked around here but nobody else has used the histograms either.
If you'd like to experiment with it and submit a pull request, we'd be happy to take the change, but otherwise I don't think it's something that we're likely to add.
> Not sure about that warning. I think adding
> self.summary_writer.add_graph(self.policy.graph)
> in Trainer.save_model() might be a better place instead; that gets called if you stop training early.
Any updates on doing this with ML-Agents 0.14? Is self.summary_writer.add_graph(self.policy.graph) still not present in the latest release?
> Not sure about that warning. I think adding
> self.summary_writer.add_graph(self.policy.graph)
> in Trainer.save_model() might be a better place instead; that gets called if you stop training early.
I tried doing this, but I get an error saying that PPOTrainer has no attribute "summary_writer". I also tried "summary_writers", but still no luck. Does anyone know how to do this in ML-Agents 1.0 (Release 3)? Thanks!
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Is there any way to look into the architecture of the neural net being trained and then reconfigure its parameters according to our requirements in order to achieve better results? I know that frozen_graph_def.pb contains the information about the latest trained neural net, but I am unable to read it. Any suggestion would be helpful.