rll / rllab

rllab is a framework for developing and evaluating reinforcement learning algorithms, fully compatible with OpenAI Gym.

Stuck while training at 977 itr #258

Open zusne opened 2 years ago

zusne commented 2 years ago

I chose the point_gather environment and used rllab to train for 5000 iterations, but training always gets stuck when it reaches itr 977. I tried training 8 programs with different parameters in parallel, and all of them got stuck at itr 977; training only one program gave the same result. I thought there was something wrong with my computer, so I switched to a different machine and trained the same program for 5000 iterations, but again it got stuck at itr 977. I am very confused about this. All the programs get stuck like this:


0% [##############################] 100% ETA: 00:00:00 Total time elapsed: 00:00:12
(intermediate progress-bar redraws elided)
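A freeze like the one in the log above, with no traceback, can usually be localized by dumping the Python stack of every thread in the stuck process. A minimal sketch using the stdlib faulthandler module (the file path here is illustrative; rllab itself does not set this up for you):

```python
import faulthandler
import os
import signal
import tempfile

# Dump all thread stacks to a real file (faulthandler needs an object
# with a working fileno(), so StringIO will not work).
path = os.path.join(tempfile.mkdtemp(), "stacks.txt")
with open(path, "w") as f:
    faulthandler.dump_traceback(file=f, all_threads=True)

content = open(path).read()
print(content)  # each frame appears as:  File "...", line N in <func>

# On Unix you can also register a signal handler at the top of your
# training script, then run `kill -USR1 <pid>` on the hung process to
# get a stack dump without restarting it.
if hasattr(signal, "SIGUSR1"):
    faulthandler.register(signal.SIGUSR1)
```

If the dump shows all workers blocked in a multiprocessing queue or pipe read, the hang is in the parallel sampler rather than in the algorithm itself.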

Everything seems to stop, but there is no error report. I wonder why this happens, and whether there is a way to load the parameters of the previously trained model.
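On the second question (reloading trained parameters): assuming the run used rllab's default snapshot settings, the trainer writes one pickle per iteration (itr_&lt;n&gt;.pkl) into the experiment's log directory, holding a dict of the objects needed to resume; the exact keys depend on the algorithm. rllab serializes these with joblib, but plain pickle is used below so the sketch is self-contained; the directory, file, and key names are illustrative:

```python
import os
import pickle
import tempfile

log_dir = tempfile.mkdtemp()

# Simulate the snapshot a trainer would have written at iteration 976
# (in a real run these files already exist in the experiment log dir).
snapshot = {"itr": 976, "policy_params": [0.1, -0.3, 2.0]}
with open(os.path.join(log_dir, "itr_976.pkl"), "wb") as f:
    pickle.dump(snapshot, f)

# To resume, find the latest itr_*.pkl and load it back.
latest = max(
    (p for p in os.listdir(log_dir) if p.startswith("itr_")),
    key=lambda p: int(p[4:-4]),  # strip "itr_" prefix and ".pkl" suffix
)
with open(os.path.join(log_dir, latest), "rb") as f:
    data = pickle.load(f)

print(latest, data["itr"])  # continue training from data["itr"] + 1
```

With rllab's actual snapshots you would use joblib.load on the same path and pull the saved policy/baseline/env objects out of the dict, then restart the algorithm from the recovered iteration.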