**Ruihyw** opened this issue 1 month ago
I have installed the extension, but it still shows: `[Warning] Pre-trained ResNet50 models cannot be used since mask2former not found`, and it reports that the model dictionary is wrong, as in the picture above.
I found the problem: the semantic segmentation module is able to read the RealSense image data, but the output is all black. I used the model and parameter files recommended in Install.md, but I still get this result. I'm very confused and don't know how to fix it :(
`print("Result unique values:", np.unique(result))` outputs `Result unique values: [133]`, which means the model predicts only a single class for the whole image.
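A quick way to see whether a prediction is degenerate is to look at the per-class pixel histogram rather than just the unique values. This is a minimal sketch in plain NumPy; it makes no assumptions about the model itself, only about the label mask it produces:

```python
import numpy as np

def diagnose_mask(result):
    """Print the pixel count per predicted class and report whether the
    mask contains more than one class. A single unique value means the
    network predicts one class everywhere, which usually points to a
    weight/architecture mismatch rather than bad input."""
    labels, counts = np.unique(result, return_counts=True)
    for lab, cnt in zip(labels, counts):
        print(f"class {lab}: {cnt} pixels ({100.0 * cnt / result.size:.1f}%)")
    return len(labels) > 1

# toy mask reproducing the symptom above: every pixel is class 133
mask = np.full((480, 640), 133, dtype=np.int64)
print(diagnose_mask(mask))  # False -> degenerate, single-class prediction
```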
Hi, this is an issue with the mask2former model; more specifically, the downloaded weights do not fit the architecture being used (I guess part of the network was renamed). If I remember correctly, this was a version issue of mmcv and mmengine: they renamed parts of the network without updating the weights. Can you make sure that you have the following versions of the packages:

- mmcv 2.0.0
- mmdet 3.1.0
- mmengine 0.8.4
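To double-check those pins programmatically, here is a small sketch using only the standard library (`importlib.metadata`); the package names and versions are the ones listed above:

```python
from importlib.metadata import version, PackageNotFoundError

# Expected pins from the reply above
EXPECTED = {"mmcv": "2.0.0", "mmdet": "3.1.0", "mmengine": "0.8.4"}

def check_versions(expected):
    """Return {package: (installed_version_or_None, matches_expected)}."""
    report = {}
    for pkg, want in expected.items():
        try:
            got = version(pkg)
        except PackageNotFoundError:
            got = None
        report[pkg] = (got, got == want)
    return report

if __name__ == "__main__":
    for pkg, (got, ok) in check_versions(EXPECTED).items():
        status = "OK" if ok else "MISMATCH"
        print(f"{pkg}: installed={got or 'MISSING'} expected={EXPECTED[pkg]} {status}")
```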
Hi, how can I get the model config for image segmentation? The ROS install only seems to include a model with its weights.
Hi! For me, with the following package versions (mmcv 2.0.0, mmdet 3.1.0, mmengine 0.8.4), the model named "Swin-B" worked! You can get the configs and weights here: https://github.com/open-mmlab/mmdetection/tree/master/configs/mask2former
Thank you so much! I changed the model, and it works! Now, should I use LIO, VIO, or something else to provide the odom?
That is up to you; basically, use whichever odometry source is more precise.
Hi! Sorry to bother you again! I'm having some problems deploying in ROS on a Jetson. I created the container from the Dockerfile, and after configuring the environment I connected the RealSense D435i camera, modified the topics, and published the static transforms from the color and depth frames to the odom frame using "tf". The program then starts without reporting any errors (the error that was reported earlier also stopped appearing once I published the static transforms).

However, when I try to view /viplanner/sem_image in rviz, the image is all black. Using rostopic echo I can see that the topic has output, but no matter how I move the camera the content stays the same. "success" means that /viplanner/sem_image/compressed has been published successfully. So the topic has output, but rviz can't display it!
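One way to tell a genuinely frozen stream from a pure rviz display problem is to hash successive message payloads: if the hash never changes while the camera moves, the publisher is sending the same frame over and over. Below is a minimal, ROS-free sketch; wiring it into a `rospy` subscriber on `/viplanner/sem_image/compressed` is an assumption and is shown only as a comment:

```python
import hashlib

class FrameChangeMonitor:
    """Track whether successive image payloads actually change.

    Feed it the raw bytes of each message (e.g. msg.data from a
    sensor_msgs/CompressedImage callback); any bytes payload works.
    """

    def __init__(self):
        self._last = None   # digest of the previous payload
        self.changed = 0    # number of frames that differed from their predecessor
        self.total = 0      # total frames seen

    def update(self, payload: bytes) -> bool:
        digest = hashlib.sha256(payload).hexdigest()
        moved = self._last is not None and digest != self._last
        self._last = digest
        self.total += 1
        if moved:
            self.changed += 1
        return moved

# Hypothetical ROS wiring (not runnable here, topic name from the report above):
#   monitor = FrameChangeMonitor()
#   rospy.Subscriber("/viplanner/sem_image/compressed", CompressedImage,
#                    lambda msg: monitor.update(bytes(msg.data)))
```

If `monitor.changed` stays at 0 while the camera moves, the segmentation node itself is republishing a static frame and rviz is not at fault.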