1) ParallelExecutor is deprecated. Please use CompiledProgram and Executor. CompiledProgram is a central place for optimization and Executor is the unified executor. An example can be found in compiler.py.
2) [3548 graph.h:204] WARN: After a series of passes, the current graph can be quite different from OriginProgram. So, please avoid using the 'OriginProgram()' method!
3) You can try our memory optimize feature to save your memory usage: ...
4) The number of graphs should be only one, but the current graph has 8 sub_graphs. If you want to see the nodes of the sub_graphs, you should use 'FLAGS_print_sub_graph_dir' to specify the output dir. NOTE: if you are not doing training, please don't pass loss_var_name.
5) Traceback (most recent call last):
     File "run.py", line 645, in
       train(logger, args)
     File "run.py", line 464, in train
       args)
     File "run.py", line 308, in validation
       ave_loss = 1.0 * total_loss / count
   ZeroDivisionError: float division by zero
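The ZeroDivisionError in 5) means `count` is still zero when the average is computed, i.e. the validation loop never processed a single batch (often a sign of an empty or misconfigured data reader). A minimal sketch of a guard is below; the names `total_loss` and `count` come from the traceback, while the function name and error message are assumptions for illustration:

```python
def average_loss(total_loss, count):
    # Guard against an empty validation pass: if no batches were
    # processed, `count` stays 0 and the bare division would raise
    # ZeroDivisionError, exactly as in the traceback above.
    if count == 0:
        raise ValueError("validation produced no batches; check the data reader")
    return 1.0 * total_loss / count
```

Raising a descriptive error (rather than silently returning 0.0) makes the real cause visible at the point of failure instead of hiding it in a misleading average.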