rail-berkeley / rlkit

Collection of reinforcement learning algorithms
MIT License

self.custom_goal_sample is None #84

Closed: Jingjinganhao closed this issue 4 years ago

Jingjinganhao commented 5 years ago

I set power = 0 and ran sawyer_pickup.py. When I then run `python scripts/run_goal_conditioned_policy.py /params.pkl`, it fails with:

return self.custom_goal_sample(batch_size)
TypeError: 'NoneType' object is not callable

How can I fix this? Thanks!
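For context, the traceback points at a goal-sampling branch that calls a `custom_goal_sampler` attribute which was never set. The sketch below is only a paraphrase of the dispatch implied by the error, not the actual rlkit/multiworld source; the class name, method name, and mode strings are assumptions.

```python
class GoalSamplingEnvSketch:
    """Rough sketch of the goal-sampling dispatch implied by the traceback.

    Names and mode strings are assumptions; the real rlkit/multiworld code
    may differ.
    """

    def __init__(self, wrapped_env, goal_sampling_mode='custom_goal_sampler'):
        self.wrapped_env = wrapped_env
        self.goal_sampling_mode = goal_sampling_mode
        # Never assigned during this run, so it stays None.
        self.custom_goal_sampler = None

    def sample_goals(self, batch_size):
        if self.goal_sampling_mode == 'custom_goal_sampler':
            # custom_goal_sampler is None here, so calling it raises
            # TypeError: 'NoneType' object is not callable
            return self.custom_goal_sampler(batch_size)
        elif self.goal_sampling_mode == 'env':
            # Fall back to the wrapped environment's own goal space.
            return self.wrapped_env.sample_goals(batch_size)
        raise ValueError(f'Unknown goal_sampling_mode: {self.goal_sampling_mode}')
```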

vitchyr commented 5 years ago

Hmmm, can you see what happens if you don't use the custom goal sampler? Also, it would be helpful if you posted the full stack trace.

Jingjinganhao commented 5 years ago

Thank you for your reply! What do I need to change so that the custom goal sampler is not used? Should I change exploration_goal_sampling_mode='custom_goal_sampler' in sawyer_pickup.py? If so, how? Could you give more details? Thanks!
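For reference, the setting in question lives in the experiment variant of sawyer_pickup.py. A minimal sketch of switching it away from the custom sampler is shown below; only the exploration_goal_sampling_mode key is confirmed by this thread, and the surrounding dict structure plus the evaluation_goal_sampling_mode key are assumptions based on the rlkit example scripts.

```python
# Sketch of the relevant variant entries in sawyer_pickup.py.
# Only exploration_goal_sampling_mode appears in this thread; the rest of
# the structure is an assumption.
variant = dict(
    # ... other experiment settings ...
    # 'env' makes the environment sample goals from its own goal space
    # instead of calling a (possibly unset) custom sampler.
    exploration_goal_sampling_mode='env',
    evaluation_goal_sampling_mode='env',
)
```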

vitchyr commented 4 years ago

Not sure if this is still relevant, but yes, you can set the goal sampling mode to something else, for example:

env.goal_sampling_mode = 'env'
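Concretely, one way to apply this when evaluating a saved snapshot is to override the mode on the loaded environment before rolling out the policy, e.g. inside scripts/run_goal_conditioned_policy.py. The snippet below is only a sketch; the snapshot keys ('evaluation/env', 'evaluation/policy') are assumptions and vary across rlkit versions.

```python
import pickle

# Sketch: load the saved snapshot and switch the env's goal sampling mode
# away from the unset custom sampler before evaluation.
with open('params.pkl', 'rb') as f:  # path is illustrative
    data = pickle.load(f)

# These keys are assumptions; older snapshots may use 'env' / 'policy'.
env = data['evaluation/env']
policy = data['evaluation/policy']

# Sample goals from the environment's own goal space.
env.goal_sampling_mode = 'env'
```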