Closed: suyunzzz closed this 2 years ago
Thanks for the contribution! The newly added visualizer is not used right now. Is open3d an alternative to mlab? If so, could you please add an option so that we can choose the visualization backend.
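A minimal sketch of what such a backend switch could look like (the flag name and structure here are assumptions, not the final implementation):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--velodyne-dir', type=str, default='sample_data')
parser.add_argument('--visualize-backend', type=str, default='open3d',
                    choices=['open3d', 'mlab'],
                    help='which visualization backend to use')
args = parser.parse_args()

if args.visualize_backend == 'open3d':
    ...  # draw with open3d
else:
    ...  # fall back to mayavi.mlab
```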
Hello Zhijian, I am a Git newbie, so I don't know whether "push -f" is right; if there are any mistakes, please tell me :)🤣
Hello @suyunzzz, I've downloaded the file which you updated yesterday, and I tried to run the visualization code with the default command:
python visualize.py
But I encounter an error:
File "visualize.py", line 360, in
outputs = model(inputs)
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 84])
The parsed arguments are:
model: SemanticKITTI_val_SPVNAS@65GMACs (default), velodyne_dir: sample_data (default), visualize_backend: open3d (default)
Do you know how to solve this problem? Hope you can give me some advice. Thanks a lot!
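For context, that ValueError usually comes from a BatchNorm layer receiving a batch of size 1 while the network is still in training mode, since batch statistics are undefined for a single sample. A minimal sketch that reproduces it (the width 84 just mirrors the message; whether this is the actual cause in visualize.py is an assumption):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(84)

bn.train()  # training mode: batch statistics need more than one sample
try:
    bn(torch.randn(1, 84))
except ValueError as e:
    print(e)  # Expected more than 1 value per channel when training, ...

bn.eval()   # eval mode uses running statistics, so batch size 1 is fine
out = bn(torch.randn(1, 84))
```

So one thing worth checking is whether model.eval() is called before inference.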
Hi @suyunzzz, thanks for the updates. Could you please remove __init__ and all .pyc files? Also, the sample data is now stored under assets. Please remove sample_data as well. Thank you!
Btw, could you also reformat the code according to pre-commit (see https://github.com/mit-han-lab/spvnas/runs/6327544860?check_suite_focus=true for more details)? Thanks!
@QuasarsZ, could you please double-check the input size: inputs.F.shape?
Hello @zhijian-liu, thanks for your response.
I ran the visualization code with:
python visualize.py --velodyne-dir ./dataset/semantic-kitti/00/velodyne/
And the input size is:
torch.Size([91852, 4])
Is my velodyne-dir set incorrectly?
Does it work if you use the original visualize.py?
Hello Zhijian, about reformatting the code according to pre-commit: do I need to format the code manually, or is it done automatically? 🤣
Hi @suyunzzz, thanks for your reply.
Running the original visualize.py gives another error (#90).
So I modified only this:
inds, labels, inverse_map = sparse_quantize(pc, feat_, voxel_size, return_index=True, return_inverse=True)
to
coords_, inds, inverse_map = sparse_quantize(pc, return_index=True, return_inverse=True)
And then I get this error:
File "visualize.py", line 360, in
outputs = model(inputs)
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 84])
@suyunzzz To use pre-commit:
pip install pre-commit
pre-commit install
pre-commit run -a
This reformats the code once now; after pre-commit install, the hooks will also run automatically before every commit.
Thanks, it seems to work 😄
Fix a bug in visualize.py and add Open3D-based visualization.
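For reference, the Open3D path in a visualizer like this typically boils down to a few calls; a minimal sketch (the function name and color handling here are illustrative, not the exact code in this PR):

```python
import numpy as np
import open3d as o3d

def visualize_open3d(points: np.ndarray, colors: np.ndarray) -> None:
    """Draw an (N, 3) point cloud with per-point RGB colors in [0, 1]."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.colors = o3d.utility.Vector3dVector(colors)
    o3d.visualization.draw_geometries([pcd])

# Example usage with random points colored by height (illustrative only):
points = np.random.rand(1000, 3)
colors = np.stack([points[:, 2]] * 3, axis=1)
visualize_open3d(points, colors)
```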