Closed tarekdawey closed 10 months ago
Hmm, that is weird. I just checked the .hydra/config.yaml file and don't see anything that references an "lfp" module. Could you pinpoint which line this is?
Also, can you try just:
python hulc/evaluation/evaluate_policy.py --dataset_path <PATH/TO/DATASET> --train_folder <PATH/TO/TRAINING/FOLDER>
by default, the evaluation loads the last checkpoint in the training log directory, so you don't need to specify the checkpoint for the pre-trained models.
When did you download the pretrained model, and did you keep hulc and calvin up to date? This error is really strange because we removed the dependency on lfp ages ago.
Same issue here. Did you find a solution?
@DaniAffCH same question for you: did you recently download the pretrained checkpoint, and have you kept calvin and hulc up to date?
I think it's a calvin problem. Here are the steps I followed:
So your problem is not related to the HULC_D_D checkpoint? Can you maybe find out in which file the line lfp.utils.transforms.NormalizeVector is specified?
Thank you for your support @lukashermann
The problem arises in calvin/calvin_models/calvin_agent/datasets/calvin_data_module.py
line 80. The line contains:
self.train_transforms = {
cam: [hydra.utils.instantiate(transform) for transform in transforms.train[cam]] for cam in transforms.train
}
Here the set of transformations to be applied is loaded. Apparently, among these transformations, there is lfp.utils.transforms.NormalizeVector.
Grepping for it shows a match only in dataset/calvin_debug_dataset/validation/lang_annotations/embeddings.npy.
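For context on why the error surfaces inside hydra.utils.instantiate: Hydra resolves the dotted `_target_` string to an importable class and then calls it. The sketch below is a simplified stand-in resolver (not Hydra's actual code) using only the standard library; it shows why a `_target_` pointing at the uninstalled `lfp` package must fail at import time.

```python
from importlib import import_module

def locate(path: str):
    """Simplified version of Hydra's resolver: import `pkg.mod.Name`
    and return the attribute `Name`."""
    module_path, _, attr = path.rpartition(".")
    return getattr(import_module(module_path), attr)

# A resolvable target works (stdlib class used as a stand-in):
cls = locate("collections.OrderedDict")
print(cls.__name__)  # OrderedDict

# The faulty target fails because no `lfp` package is installed:
try:
    locate("lfp.utils.transforms.NormalizeVector")
    resolved = True
except ModuleNotFoundError:
    resolved = False
print(resolved)  # False
```

This is why any `lfp.*` target left in a config or statistics file crashes instantiation, regardless of what the rest of the config looks like.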
It would be helpful to get more information. Can you set a breakpoint after this line and print the contents of self.transforms and transforms?
Here is the print of cam and transforms. It's not possible to print self.transforms because the code fails inside the list comprehension, so the variable never gets set.
rgb_static {'_target_': 'torchvision.transforms.Resize', 'size': 200}
rgb_static {'_target_': 'calvin_agent.utils.transforms.ScaleImageTensor'}
rgb_static {'_target_': 'torchvision.transforms.Normalize', 'mean': [0.5], 'std': [0.5]}
rgb_static {'_target_': 'calvin_agent.utils.transforms.AddGaussianNoise', 'mean': [0.0], 'std': [0.01]}
rgb_gripper {'_target_': 'torchvision.transforms.Resize', 'size': 84}
rgb_gripper {'_target_': 'calvin_agent.utils.transforms.ScaleImageTensor'}
rgb_gripper {'_target_': 'torchvision.transforms.Normalize', 'mean': [0.5], 'std': [0.5]}
rgb_gripper {'_target_': 'calvin_agent.utils.transforms.AddGaussianNoise', 'mean': [0.0], 'std': [0.01]}
depth_static {'_target_': 'torchvision.transforms.Resize', 'size': 200}
depth_static {'_target_': 'calvin_agent.utils.transforms.AddDepthNoise', 'shape': [1000.0], 'rate': [1000.0]}
depth_static {'_target_': 'calvin_agent.utils.transforms.AddGaussianNoise', 'mean': [0.0], 'std': [0.01]}
depth_gripper {'_target_': 'torchvision.transforms.Resize', 'size': 84}
depth_gripper {'_target_': 'calvin_agent.utils.transforms.AddGaussianNoise', 'mean': [0.0], 'std': [0.01]}
rgb_tactile {'_target_': 'torchvision.transforms.Resize', 'size': 70}
rgb_tactile {'_target_': 'torchvision.transforms.RandomCrop', 'size': 64}
rgb_tactile {'_target_': 'calvin_agent.utils.transforms.ScaleImageTensor'}
rgb_tactile {'_target_': 'torchvision.transforms.Normalize', 'mean': [0.5], 'std': [0.5]}
rgb_tactile {'_target_': 'calvin_agent.utils.transforms.AddGaussianNoise', 'mean': [0.0], 'std': [0.01]}
depth_tactile {'_target_': 'torchvision.transforms.Resize', 'size': 64}
depth_tactile {'_target_': 'torchvision.transforms.Normalize', 'mean': [0.1], 'std': [0.2]}
depth_tactile {'_target_': 'calvin_agent.utils.transforms.AddGaussianNoise', 'mean': [0.0], 'std': [0.01]}
robot_obs {'_target_': 'calvin_agent.utils.transforms.NormalizeVector'}
robot_obs {'_target_': 'calvin_agent.utils.transforms.AddGaussianNoise', 'mean': [0.0], 'std': [0.01]}
robot_obs {'_target_': 'lfp.utils.transforms.NormalizeVector', 'mean': [0.039233, -0.118554, 0.507826, 1.079174, -0.083069, 1.579753, 0.054622, -0.736859, 1.017769, 1.792879, -2.099604, -0.993738, 1.790842, 0.586534, 0.095367], 'std': [0.150769, 0.1104, 0.06253, 2.883517, 0.126405, 0.377196, 0.030152, 0.334392, 0.172714, 0.240513, 0.3842, 0.198596, 0.158712, 0.346865, 0.995442]}
scene_obs {'_target_': 'calvin_agent.utils.transforms.NormalizeVector'}
scene_obs {'_target_': 'calvin_agent.utils.transforms.AddGaussianNoise', 'mean': [0.0], 'std': [0.01]}
scene_obs {'_target_': 'lfp.utils.transforms.NormalizeVector', 'mean': [0.150934, 0.119917, 0.000239, 0.042049, 0.487755, 0.47448, 0.057482, -0.088074, 0.431237, 0.046034, 0.030599, 0.027333, 0.062103, -0.092833, 0.430236, -0.054962, 0.019381, 0.096546, 0.064944, -0.093058, 0.428381, 0.024941, 0.002746, -0.031589], 'std': [0.125757, 0.09654, 0.002148, 0.041916, 0.49985, 0.499348, 0.146225, 0.119266, 0.050408, 1.430807, 0.676023, 2.017468, 0.142979, 0.113236, 0.049651, 1.545888, 0.3906, 1.763569, 0.143077, 0.11546, 0.050363, 1.514873, 0.431664, 1.860245]}
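As an aside, the failing entry can also be pinpointed without a breakpoint by expanding the comprehension into a loop with a try/except. The sketch below stubs `instantiate` with a minimal import-based resolver so it runs standalone; in the real code you would call `hydra.utils.instantiate` instead, and `fractions.Fraction` here is just a resolvable stand-in target.

```python
from importlib import import_module

def instantiate(conf: dict):
    """Stand-in for hydra.utils.instantiate: resolve `_target_` and call it
    with the remaining keys as keyword arguments."""
    module_path, _, attr = conf["_target_"].rpartition(".")
    cls = getattr(import_module(module_path), attr)
    return cls(**{k: v for k, v in conf.items() if k != "_target_"})

# Mimics the structure of transforms.train from the dump above:
transforms_train = {
    "robot_obs": [
        {"_target_": "fractions.Fraction", "numerator": 1},    # resolvable stand-in
        {"_target_": "lfp.utils.transforms.NormalizeVector"},  # the faulty entry
    ]
}

train_transforms = {}
for cam, confs in transforms_train.items():
    built = []
    for conf in confs:
        try:
            built.append(instantiate(conf))
        except Exception:
            # Report the offending target instead of crashing mid-comprehension
            print(f"failed to instantiate for {cam}: {conf['_target_']}")
    train_transforms[cam] = built
```

Running this prints the exact target that cannot be imported, which immediately identifies the stale `lfp` entry.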
OK, can you send the content of the statistics.yaml in the training and validation folders of your debug dataset?
In the line I sent in my previous reply, the transforms config file gets merged with the statistics.yaml from the dataset, so the faulty entry has to come from one of them.
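That merge step can be sketched with plain dicts (the real code goes through OmegaConf, so the exact merge semantics differ; the values below are trimmed copies of the dump above):

```python
# Per-key lists of transform configs from the transforms config file are
# extended with the entries contributed by the dataset's statistics.yaml.
transforms_cfg = {
    "robot_obs": [
        {"_target_": "calvin_agent.utils.transforms.NormalizeVector"},
        {"_target_": "calvin_agent.utils.transforms.AddGaussianNoise",
         "mean": [0.0], "std": [0.01]},
    ],
}
statistics = {
    "robot_obs": [
        {"_target_": "lfp.utils.transforms.NormalizeVector",
         "mean": [0.039233], "std": [0.150769]},  # truncated for brevity
    ],
}

merged = {key: transforms_cfg.get(key, []) + statistics.get(key, [])
          for key in {**transforms_cfg, **statistics}}
print(merged["robot_obs"][-1]["_target_"])  # lfp.utils.transforms.NormalizeVector
```

The point: even a perfectly clean transforms config ends up with the stale `lfp` target after the merge, which is why grepping the repo's configs finds nothing.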
Indeed, you are right: both the training and the validation statistics.yaml reference the lfp library. Here is the training one:
robot_obs:
- _target_: lfp.utils.transforms.NormalizeVector
mean: [0.039233, -0.118554, 0.507826, 1.079174, -0.083069, 1.579753,
0.054622, -0.736859, 1.017769, 1.792879, -2.099604, -0.993738,
1.790842, 0.586534, 0.095367]
std: [0.150769, 0.1104 , 0.06253 , 2.883517, 0.126405, 0.377196,
0.030152, 0.334392, 0.172714, 0.240513, 0.3842 , 0.198596,
0.158712, 0.346865, 0.995442]
scene_obs:
- _target_: lfp.utils.transforms.NormalizeVector
mean: [0.150934, 0.119917, 0.000239, 0.042049, 0.487755, 0.47448 ,
0.057482, -0.088074, 0.431237, 0.046034, 0.030599, 0.027333,
0.062103, -0.092833, 0.430236, -0.054962, 0.019381, 0.096546,
0.064944, -0.093058, 0.428381, 0.024941, 0.002746, -0.031589]
std: [ 0.125757, 0.09654 , 0.002148, 0.041916, 0.49985 , 0.499348,
0.146225, 0.119266, 0.050408, 1.430807, 0.676023, 2.017468,
0.142979, 0.113236, 0.049651, 1.545888, 0.3906 , 1.763569,
0.143077, 0.11546 , 0.050363, 1.514873, 0.431664, 1.860245]
act_min_bound: [-0.432188, -0.545456, 0.293439, -3.141593, -0.811348, -3.141573, -1. ]
act_max_bound: [0.42977 , 0.139396, 0.796262, 3.141592, 0.638583, 3.141551, 1. ]
Yes, that's what I suspected. You must have an old version of the debug dataset, so you can either download it again or replace lfp.utils.transforms.NormalizeVector with calvin_agent.utils.transforms.NormalizeVector in both statistics.yaml files.
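The second option (patching in place) can be sketched as a small script. The demo below operates on a throwaway temp file so it is self-contained; for the real fix you would point `patch_statistics` at the training and validation statistics.yaml paths in your dataset.

```python
from pathlib import Path
import tempfile

OLD = "lfp.utils.transforms.NormalizeVector"
NEW = "calvin_agent.utils.transforms.NormalizeVector"

def patch_statistics(path: Path) -> bool:
    """Rewrite the stale lfp target in-place; return True if anything changed."""
    text = path.read_text()
    if OLD not in text:
        return False
    path.write_text(text.replace(OLD, NEW))
    return True

# Demo on a throwaway file mimicking one statistics.yaml entry:
with tempfile.TemporaryDirectory() as tmp:
    f = Path(tmp) / "statistics.yaml"
    f.write_text(f"robot_obs:\n- _target_: {OLD}\n")
    changed = patch_statistics(f)
    patched = NEW in f.read_text()
print(changed, patched)  # True True
```

Re-downloading the dataset is the cleaner fix, but this avoids pulling the data again when only the two YAML files are stale.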
OK, got it. The problem is in the file calvin_models/calvin_agent/datasets/calvin_data_module.py, which contains a hardcoded link to an outdated dataset.
Thank you very much for your support.
Ah I see, thanks for spotting that. I will remove the wrong link!
Hi,
Thanks for your great work. I am trying to evaluate the pretrained model, and I run:
python hulc/evaluation/evaluate_policy.py --dataset_path /home/systemtec/hulc/dataset/task_D_D --train_folder /home/systemtec/hulc/checkpoints/HULC_D_D --checkpoint /home/systemtec/hulc/checkpoints/HULC_D_D/saved_models/HULC_D_D.ckpt --debug
pybullet build time: May 20 2022 19:44:17
Global seed set to 0
Traceback (most recent call last):
  /home/systemtec/mambaforge/envs/hulc_venv/lib/python3.8/site-packages/hydra/_internal/utils.py:570 in _locate
    module = import_module(mod)
    locals: mod = '', parts = ['lfp', 'utils', 'transforms', 'NormalizeVector'],
            path = 'lfp.utils.transforms.NormalizeVector'
  /home/systemtec/mambaforge/envs/hulc_venv/lib/python3.8/importlib/__init__.py:127 in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
    locals: level = 0, name = '', package = None
  in _gcd_import:1011
  in _sanity_check:950
ValueError: Empty module name
ImportError: Error loading module 'lfp.utils.transforms.NormalizeVector'