Closed · JaCoderX closed this 5 years ago
I'm in the process of understanding the model's performance bottlenecks, so I have added some functionality to record runtime statistics for the graph,
based on the TF docs.
This solution gives only a partial picture of the model's performance, as it doesn't record statistics for the entire net.
A better solution would be to place the starting point of the statistics gathering at the entry point of the model, so that all model statistics are properly gathered.
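For reference, a minimal sketch of what per-step statistics gathering looks like with the TF 1.x `RunMetadata` API described in the TF docs (the helper name and arguments here are illustrative, not the actual code from this commit):

```python
import tensorflow as tf  # TF 1.x API, accessed via tf.compat.v1

def run_with_trace(sess, fetches, writer, step, feed_dict=None):
    # Illustrative helper: run `fetches` with full tracing enabled and
    # log the collected metadata so TensorBoard can show compute time
    # and memory usage per node in the graph view.
    tf1 = tf.compat.v1
    run_options = tf1.RunOptions(trace_level=tf1.RunOptions.FULL_TRACE)
    run_metadata = tf1.RunMetadata()
    results = sess.run(fetches, feed_dict=feed_dict,
                       options=run_options, run_metadata=run_metadata)
    # The tag must be unique per step; FileWriter.add_run_metadata
    # raises an error on duplicate tags.
    writer.add_run_metadata(run_metadata, 'step%d' % step)
    return results
```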
LGTM!
@Kismuz I think I found a serious flaw in my pull request, so I think it's best to revert the commit.
While working on a model, I noticed that this solution is really memory-hungry: memory usage seems to grow linearly with runtime, and the TensorBoard files get really massive.
Just for the record, the commit was missing this on line 1414:
self.process_summary(sess, data, model_summary, run_metadata=run_metadata)
The solution could be optimized to reduce memory usage, but I think it's just better to revert it.
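One common way to cut that memory and file growth (a sketch of the general technique, not code from this PR) is to gather the full trace only every N-th step instead of on every step:

```python
def should_trace(step, every_n=100):
    """Decide whether to gather FULL_TRACE metadata on this step.

    Hypothetical helper; `every_n` is an assumed parameter name.
    Recording run metadata on every step makes the event files and
    retained metadata grow linearly with runtime; sampling every
    `every_n` steps bounds that growth at the cost of coarser timing.
    """
    return every_n > 0 and step % every_n == 0
```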
done
Collect statistics about computation time and memory usage and present them in the TensorBoard graph