Hi @MuzWong, this should make it work:
```python
# Replaces the observation_space definition in the env's __init__ (uses numpy as np and gym.spaces)
low_bound = np.array([-np.inf] * 2 * len(self.specs_id) + [-np.inf] * len(self.params_id))
high_bound = np.array([np.inf] * 2 * len(self.specs_id) + [np.inf] * len(self.params_id))
self.observation_space = spaces.Box(low=low_bound, high=high_bound, dtype=np.float32)
```
The error happens because the observation space in the original code isn't specified correctly. Note that this doesn't affect the correctness of the environment; it's just a workaround.
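As a quick sanity check (the sizes below are placeholders, not the env's real ones, so treat this as a sketch): with infinite bounds the `Box` accepts any finite observation, which is why the validation of the `reset()` observation stops failing.

```python
import numpy as np
from gym import spaces

# Placeholder sizes; substitute len(self.specs_id) and len(self.params_id) from the env.
n_specs, n_params = 4, 7

low_bound = np.array([-np.inf] * 2 * n_specs + [-np.inf] * n_params)
high_bound = np.array([np.inf] * 2 * n_specs + [np.inf] * n_params)
obs_space = spaces.Box(low=low_bound, high=high_bound, dtype=np.float32)

# Any finite observation (normalized specs plus integer parameter indices) now passes.
obs = np.concatenate([np.random.uniform(-1, 1, 2 * n_specs),
                      np.random.randint(0, 100, n_params)]).astype(np.float32)
print(obs_space.contains(obs))  # True
```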
Thanks for your answer. I followed the method mentioned by gulleh: changing `PERF_HIGH` to 1 and the other line to `high=np.array([TwoStageAmp.PERF_HIGH]*2*len(self.specs_id)+len(self.params_id)*[100]))`. Now it has run to completion. However, it took 24 hours, so amazing~
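(For readability, here is that change written out with the asterisks markdown swallowed; the low bound and the surrounding `spaces.Box` call are my guess at the neighbouring env code, not a quote of it.)

```python
# Sketch of gulleh's finite-bound variant, assuming PERF_HIGH has been changed to 1.
# The low bound shown here is an assumed counterpart, not taken from the repo.
low = np.array([TwoStageAmp.PERF_LOW] * 2 * len(self.specs_id) + len(self.params_id) * [-100])
high = np.array([TwoStageAmp.PERF_HIGH] * 2 * len(self.specs_id) + len(self.params_id) * [100])
self.observation_space = spaces.Box(low=low, high=high, dtype=np.float32)
```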
Hi @ahmedo42, I'm sorry to disturb you again, but I ran into a tough problem, shown below. When I run `run autockt/rollout.py /path/to/ray/checkpoint --run PPO --env opamp-v0 --num_val_specs 250 --traj_len 10000 --no-render`, it raises an error that I don't know how to solve. In addition, what values should `num_val_specs` and `traj_len` take here? I just set them to 250 and 10000.

```
In [1]: run autockt/rollout.py /path/to/ray/checkpoint --run PPO --env opamp-v0 --num_val_specs 250 --traj_len 10000 --no-render
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
File ~/MuzWong/RL/AutoCkt/autockt/rollout.py:20, in <module>
     17 from ray.tune.registry import register_env
     19 #from bag_deep_ckt.autockt.envs.bag_opamp_discrete import TwoStageAmp
---> 20 from envs.spectre_vanilla_opamp import TwoStageAmp
     22 EXAMPLE_USAGE = """
     23 Example Usage via RLlib CLI:
     24 rllib rollout /tmp/ray/checkpoint_dir/checkpoint-0 --run DQN
    (...)
     29     --env CartPole-v0 --steps 1000000 --out rollouts.pkl
     30 """
     31 # Note: if you use any custom models or envs, register them here first, e.g.:
     32 #
     33 # ModelCatalog.register_custom_model("pa_model", ParametricActionsModel)
     34 # register_env("pacartpole", lambda: ParametricActionCartpole(10))

ModuleNotFoundError: No module named 'envs.spectre_vanilla_opamp'
```
Well, `traj_len` describes how many simulation steps the agent gets to achieve the required specs; in the paper it's 30, I think. `num_val_specs` is how many specs you use to estimate the performance of the agent; it can be whatever you want, but you must generate them first with `gen_specs.py`.

As for the error, it's just an import error; I think you just need to change it to `envs.ngspice_vanilla_opamp`.
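Concretely, the fix would be swapping the failing import in `autockt/rollout.py` for the ngspice env suggested above, something like:

```python
# In autockt/rollout.py, replace the env import that fails:
# from envs.spectre_vanilla_opamp import TwoStageAmp   # module not present in this setup
from envs.ngspice_vanilla_opamp import TwoStageAmp     # ngspice-based env suggested above
```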
Thank you very much, @ahmedo42. Because I'm new to reinforcement learning, I don't know a lot about it. My training data is stored under `ray_results/train_45nm_ngspice`, so when I follow the prompt on GitHub and execute `run autockt/rollout.py /path/to/ray/checkpoint --run PPO --env opamp-v0 --num_val_specs 1000 --traj_len 30 --no-render`, an error message "Could not find params.json in either the checkpoint dir or its parent directory." is reported. For this problem, I tried the following command: `run autockt/rollout.py /home/wangyuan/ray_results/train_45nm_ngspice/PPO_TwoStageAmp_fb6cf_00000_0_2022-02-27_20-37-10 --run PPO --env opamp-v0 --num_val_specs 1000 --traj_len 30 --no-render`, but it still does not work. So I want to ask how you set it up here.
You need to pass the path to the checkpoint correctly. Make sure there is a directory with the name of the checkpoint, and a `params.json` file, in your results directory; something like `/PPO_TwoStageAmp_cba1e_00000_0_2022-03-01_13-23-44/checkpoint_000105/checkpoint-105`.
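If it helps, here is a small sketch (directory names borrowed from this thread, adjust to your run) that lists candidate checkpoint paths you could pass to `rollout.py`:

```python
import glob
import os

# Ray/Tune layout (roughly): <results_dir>/<trial_dir>/params.json and
# <results_dir>/<trial_dir>/checkpoint_<N>/checkpoint-<N>
results_dir = os.path.expanduser("~/ray_results/train_45nm_ngspice")
pattern = os.path.join(results_dir, "PPO_TwoStageAmp_*", "checkpoint_*", "checkpoint-*")
for path in sorted(glob.glob(pattern)):
    if not path.endswith(".tune_metadata"):
        print(path)  # pass one of these paths to rollout.py
```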
Hello, I am studying your open-source code, but when I follow the reinforcement-learning "training agent" instructions in the README, I hit the problem below and don't know how to solve it. I would be very grateful for a timely reply!

1. `run autockt/val_autobag_ray.py`

```
(PPOTrainer pid=120010) ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=119960, ip=192.168.10.12)
(PPOTrainer pid=120010)   File "/home/wangyuan/.local/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 464, in __init__
(PPOTrainer pid=120010)     _validate_env(self.env, env_context=self.env_context)
(PPOTrainer pid=120010)   File "/home/wangyuan/.local/lib/python3.9/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1702, in _validate_env
(PPOTrainer pid=120010)     raise EnvError(
(PPOTrainer pid=120010) ray.rllib.utils.error.EnvError: Env's observation_space Box([-1. -1. -1. -1. -1. -1. -1. -1.  1.  1.  1.  1.  1.  1.  1.], [0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1.], (15,), float32) does not contain returned observation after a reset ([ 9.9401027e-03 -8.5833985e-01 -7.7145004e-01  8.2071406e-01
(PPOTrainer pid=120010)  -1.5965167e-02  5.8320427e-01  0.0000000e+00  6.1827123e-01
(PPOTrainer pid=120010)   3.3000000e+01  3.3000000e+01  3.3000000e+01  3.3000000e+01
(PPOTrainer pid=120010)   3.3000000e+01  1.4000000e+01  2.0000000e+01])!
```

Looking forward to your reply!
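For anyone who lands here: a small diagnostic sketch using the numbers from the log above shows which entries of the reset observation fall outside the declared `Box`; it is the same bound mismatch that the workaround at the top of this thread addresses.

```python
import numpy as np

# Bounds and observation copied from the error message above.
low = np.array([-1.0] * 8 + [1.0] * 7, dtype=np.float32)
high = np.array([0.0] * 8 + [1.0] * 7, dtype=np.float32)
obs = np.array([9.9401027e-03, -8.5833985e-01, -7.7145004e-01, 8.2071406e-01,
                -1.5965167e-02, 5.8320427e-01, 0.0, 6.1827123e-01,
                33.0, 33.0, 33.0, 33.0, 33.0, 14.0, 20.0], dtype=np.float32)

violations = np.where((obs < low) | (obs > high))[0]
print(violations)  # several spec entries exceed the high of 0, and every parameter entry exceeds 1
```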