Open ghost opened 6 years ago
When training on a large dataset, it is very slow, and I found that the default batch size is 1. Does it support other batch sizes?
I have run into problems before when global variables are used with multiprocessing. When I ran FISM, the global variables broke in the same way: in pool.map(get_train_batch, range(_num_batch)), the globals accessed inside get_train_batch are None. Did you encounter this problem when running it?