Closed nanbaima closed 4 years ago
Never mind, I already figured it out; it was my own mistake. I was trying to get the path of the file with logger.get_snapshot_dir(), which works during training because setup_logger creates the snapshot directory.
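For context, the failure mode can be reproduced without rlkit: a snapshot directory that is only assigned by a setup call stays None otherwise, so os.path.join raises the same TypeError as in the traceback. A minimal sketch (the Logger class and its method names here are hypothetical stand-ins, not rlkit's actual internals):

```python
import os

class Logger:
    """Hypothetical stand-in for a training logger; not rlkit's real class."""
    def __init__(self):
        self._snapshot_dir = None  # unset until a setup step runs

    def set_snapshot_dir(self, path):
        self._snapshot_dir = path

    def get_snapshot_dir(self):
        return self._snapshot_dir

logger = Logger()

# Without the setup step, the dir is None and os.path.join fails:
try:
    os.path.join(logger.get_snapshot_dir(), "reward.csv")
except TypeError as e:
    print(e)  # expected str, bytes or os.PathLike object, not NoneType

# Once a setup step has assigned the directory, the same call works:
logger.set_snapshot_dir("/tmp/my_experiment")
print(os.path.join(logger.get_snapshot_dir(), "reward.csv"))
```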
I use it to get the path of the training data so I can write a .csv in the same place, and later plot each reward component's contribution to the total reward at every step.
However, since I want to access data that has already been trained, the same approach won't work for run_policy; instead I now use the args.file argument that I pass myself when I run the script run_policy.py.
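A sketch of that workaround: derive the log directory from the snapshot file path passed on the command line, rather than from the logger (the paths here are made-up examples):

```python
import os
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("file", help="path to the saved policy snapshot, e.g. params.pkl")
# Example invocation; normally this comes from the real command line:
args = parser.parse_args(["/tmp/my_experiment/params.pkl"])

# Use the snapshot file's own directory as the log dir, so no
# logger setup is needed when replaying an already-trained policy:
log_dir = os.path.dirname(os.path.abspath(args.file))
reward_csv = os.path.join(log_dir, "reward.csv")
print(reward_csv)  # /tmp/my_experiment/reward.csv
```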
Sorry for my mistake!
Hey Vitchyr,
Is there a problem with running run_policy.py while I'm also training a model with run_sac.py? Should that cause issues? Because right after the env has almost finished loading and starting, I get this message:
```
Traceback (most recent call last):
  File "run_policy.py", line 44, in <module>
    simulate_policy(args)
  File "run_policy.py", line 28, in simulate_policy
    render=True,
  File "~/rlkit/rlkit/samplers/rollout_functions.py", line 105, in rollout
    o = env.reset()
  File "~/rlkit/rlkit/envs/wrappers.py", line 21, in reset
    return self._wrapped_env.reset(**kwargs)
  File "Env.py", line 134, in reset
    with open(join(self.log_dir,"reward.csv"), "a") as csvfile:
  File "/usr/lib/python3.6/posixpath.py", line 80, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
QObject::~QObject: Timers cannot be stopped from another thread
QMutex: destroying locked mutex
```