alexanderswerdlow / BEVGen

BEVGen
MIT License
67 stars 5 forks

inference error #5

Closed GallonDeng closed 9 months ago

GallonDeng commented 10 months ago

Following the README, running inference with `python generate.py xxxxxx` fails with the following error:

```
  File "/home/BEVGen/multi_view_generation/modules/transformer/mask_generator.py", line 90, in get_image_direction_vectors
    data = torch.load(f'pretrained/cam_data_{cfg.dataset_name}.pt')
  File "/opt/conda/envs/bev/lib/python3.10/site-packages/torch/serialization.py", line 699, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/opt/conda/envs/bev/lib/python3.10/site-packages/torch/serialization.py", line 230, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/opt/conda/envs/bev/lib/python3.10/site-packages/torch/serialization.py", line 211, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'pretrained/cam_data_argoverse.pt'
```

It needs the file 'pretrained/cam_data_argoverse.pt'? Where can I find it?

alexanderswerdlow commented 10 months ago

Strange, I was sure I had committed this file to the repo, but clearly not; sorry about that. I'll try to track down the file itself, but in the meantime you can call save_cam_data in multi_view_generation/bev_utils/argoverse.py with any batch of data (the cameras are fixed).
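For reference, here is a minimal sketch of what that workaround amounts to. The exact keys, shapes, and signature that the real save_cam_data uses are assumptions on my part; only the output path is taken from the traceback above.

```python
import os
import torch

# Hypothetical sketch of save_cam_data; the real function lives in
# multi_view_generation/bev_utils/argoverse.py and may differ.
# Since the cameras are fixed, recording them from one batch is enough.
def save_cam_data(intrinsics: torch.Tensor, extrinsics: torch.Tensor,
                  dataset_name: str = "argoverse") -> str:
    """Save per-camera parameters to the path that mask_generator.py loads from."""
    os.makedirs("pretrained", exist_ok=True)
    path = f"pretrained/cam_data_{dataset_name}.pt"
    torch.save({"intrinsics": intrinsics, "extrinsics": extrinsics}, path)
    return path

# Example with dummy parameters for 7 cameras (3x3 intrinsics, 4x4 poses):
path = save_cam_data(torch.eye(3).repeat(7, 1, 1), torch.eye(4).repeat(7, 1, 1))
```

Once the file exists at `pretrained/cam_data_argoverse.pt`, the `torch.load` call in mask_generator.py should succeed.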

I added the nuScenes data file to the training branch I just pushed since I had that file on hand.

GallonDeng commented 10 months ago

@alexanderswerdlow thanks very much. I have now run inference successfully with the given command `python generate.py xxx`. However, I had to use pretrained/argoverse_stage_two.ckpt/zero_to_fp32.py to replace the one in the deepspeed/utils directory to avoid a model-loading error. Another question: can I use scripts/inference.py to generate synthesized images from a BEV layout given as input? Is there a simple way to do that?

alexanderswerdlow commented 9 months ago

@AllenDun Yes, you might need to do that for DeepSpeed. This function should handle things without that, but if not, your workaround should work.

Avoid using scripts/inference.py; I believe that was only for debugging. I committed a lot of unnecessary files to that branch in the hope they might be useful to someone, but for general inference, use generate.py.

multi_view_generation/scripts/interactive_editing.py is something I'm proud of: it spins up a web demo for interactive editing. But get regular inference working for you first.

GallonDeng commented 9 months ago

@alexanderswerdlow thanks for your reply. I tried multi_view_generation/scripts/interactive_editing.py, and it's awesome. Can I edit the annotation type so that the generated images change, or do I just have to choose another av2 instance?

alexanderswerdlow commented 9 months ago

If I remember correctly, I only made it support moving xy position, not changing the annotation type. Adding support for changing the type shouldn’t be hard though.

You can also change the instance to get a whole new scene.

GallonDeng commented 9 months ago

Thanks, I got it. Nice work, I will do more experiments.