melodyguan / enas

TensorFlow Code for paper "Efficient Neural Architecture Search via Parameter Sharing"
https://arxiv.org/abs/1802.03268
Apache License 2.0

Longer training time for each batch after some steps #91

Open SiZuo opened 5 years ago

SiZuo commented 5 years ago

Hi, I found that the time per training step keeps increasing during the training phase. It might be because new operations are being added to the graph after each sess.run() call.
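
To check whether the graph is actually growing, I log the op count every step with something like the sketch below (a toy TF1 example; the placeholder graph just stands in for the real ENAS child/controller graph):

```python
import tensorflow as tf

# Toy TF1 graph; in ENAS this would be the existing child/controller graph.
x = tf.placeholder(tf.float32, shape=[None, 10])
y = tf.reduce_mean(x)

with tf.Session() as sess:
    graph = tf.get_default_graph()
    for step in range(5):
        sess.run(y, feed_dict={x: [[0.0] * 10]})
        # If this count keeps growing from step to step, new ops are being
        # created inside the loop, which is the usual cause of the slowdown.
        print("step %d: %d ops in graph" % (step, len(graph.get_operations())))
```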

I am thinking of freezing the graph with something like tf.reset_default_graph() or tf.get_default_graph().finalize().
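
For example, this is roughly what I mean by freezing (again a toy TF1 sketch, only showing the finalize part, with the same placeholder graph standing in for the real one):

```python
import tensorflow as tf

# Same toy graph as above, standing in for the real ENAS graph.
x = tf.placeholder(tf.float32, shape=[None, 10])
y = tf.reduce_mean(x)

with tf.Session() as sess:
    # Freeze the graph once everything has been built. Any later attempt to
    # add an op inside the training loop raises a RuntimeError instead of
    # silently making each step slower.
    tf.get_default_graph().finalize()
    for step in range(5):
        sess.run(y, feed_dict={x: [[0.0] * 10]})
```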

But my question is: since the network structure changes every time the controller samples a new architecture, would finalizing the graph as above cause a problem?