MahanFathi / Model-Based-RL

Model-based Policy Gradients

ModuleNotFoundError: No module named 'optimizer' #3

Open shubham83183 opened 2 years ago

shubham83183 commented 2 years ago

Hi, hope you are doing well. I tried to run this code, but I am getting the following error. Please let me know what I need to do to fix it.

```
(VANILLA) shubham@shubham-VirtualBox:~/Model-Based-RL$ python3 main.py --config-file ./configs/inverted_pendulum.yaml
Traceback (most recent call last):
  File "main.py", line 5, in <module>
    import model.engine.trainer
  File "/home/shubham/Model-Based-RL/model/__init__.py", line 1, in <module>
    from .build import build_model
  File "/home/shubham/Model-Based-RL/model/build.py", line 2, in <module>
    from model import archs
  File "/home/shubham/Model-Based-RL/model/archs/__init__.py", line 1, in <module>
    from .basic import Basic
  File "/home/shubham/Model-Based-RL/model/archs/basic.py", line 3, in <module>
    from model.blocks import build_policy, mj_torch_block_factory
  File "/home/shubham/Model-Based-RL/model/blocks/__init__.py", line 1, in <module>
    from .policy import build_policy
  File "/home/shubham/Model-Based-RL/model/blocks/policy/__init__.py", line 3, in <module>
    from .trajopt import TrajOpt
  File "/home/shubham/Model-Based-RL/model/blocks/policy/trajopt.py", line 4, in <module>
    from .strategies import *
  File "/home/shubham/Model-Based-RL/model/blocks/policy/strategies.py", line 9, in <module>
    from optimizer import Optimizer
ModuleNotFoundError: No module named 'optimizer'
```

amdee commented 1 year ago

@aikkala could you by any chance add the "optimizer" file to the main branch? Thanks

aikkala commented 1 year ago

Hello @amdee. Unfortunately, I no longer have this project locally, so if the optimizer file is missing from this repo, I can't upload it. Perhaps you're able to reverse engineer the functionality; it is most likely just some wrapper around a torch optimizer.

amdee commented 1 year ago

@aikkala Thanks for the quick reply. I think I can reverse engineer the optimizer, but I wasn't sure what it was doing, especially the calls to its .ask() and .tell() methods. Would it be possible to give me some insight into what the optimizer module does? Any info you think would help with reverse engineering it would be appreciated.

aikkala commented 1 year ago

@amdee -- this time I wasn't so fast with the reply, sorry.

Now that you mention the ask and tell methods, I think that optimizer may have been a CMA-ES optimizer, or perhaps a wrapper that provided an interface to both a CMA-ES and a torch optimizer. The ask and tell methods point to a CMA-ES (or some other black-box) optimizer: the ask method returns a batch of candidate solutions, and the tell method expects a value (e.g. a fitness function value) for each of those solutions.
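
Since the original file is gone, here is a minimal sketch of what such an ask/tell wrapper might look like, assuming it wrapped pycma (the `cma` package). The class name `Optimizer` matches the import in `strategies.py`, but the constructor arguments and the extra `stop()` helper are guesses, not the original API.

```python
# Hypothetical reconstruction of the missing `optimizer` module.
# Assumes a thin ask/tell wrapper around pycma (pip install cma);
# argument names and defaults are guesses.
import cma
import numpy as np


class Optimizer:
    """Ask/tell-style wrapper around a CMA-ES optimizer."""

    def __init__(self, x0, sigma0=0.5, popsize=None):
        opts = {}
        if popsize is not None:
            opts["popsize"] = popsize
        self.es = cma.CMAEvolutionStrategy(np.asarray(x0, dtype=float), sigma0, opts)

    def ask(self):
        # Returns a list of candidate solutions, one per population member.
        return self.es.ask()

    def tell(self, solutions, fitness):
        # `fitness` holds one scalar per solution; CMA-ES minimizes, so
        # negate rewards before passing them in if you are maximizing.
        self.es.tell(solutions, fitness)

    def stop(self):
        # Empty dict while the search should continue (guessed convenience method).
        return self.es.stop()


if __name__ == "__main__":
    # Toy usage example on a quadratic objective.
    opt = Optimizer(x0=np.zeros(4), sigma0=0.3)
    while not opt.stop():
        candidates = opt.ask()
        losses = [float(np.sum(np.square(x))) for x in candidates]
        opt.tell(candidates, losses)
    print("best solution:", opt.es.result.xbest)
```

In `strategies.py` the trajectory-optimization strategies would then call `ask()` to get candidate action sequences (or parameter vectors), roll them out to get returns, and report the (negated) returns back via `tell()`; whether the original also dispatched to a torch optimizer behind the same interface is unclear.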