Closed wsq1010 closed 3 weeks ago
Hi, why do you want to change the model? The two checkpoints are for the refine network and the score network respectively. You cannot swap them.
Hello, I see that run_demo does not use the camera extrinsic parameters, only the intrinsic parameters. Is there any problem with the coordinates we obtain in the camera coordinate system?
The pose you get is the object pose in camera coordinates. That is unrelated to the extrinsic parameters (e.g. the camera-to-world transform).
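To make the distinction concrete, here is a minimal sketch (the transform names `T_co` and `T_wc` are illustrative, not from the repo): the pose returned by the demo is the object-in-camera transform, and extrinsics only come into play if you additionally want the pose in world coordinates.

```python
import numpy as np

# T_co: object pose in the camera frame (what the demo estimates).
# This alone is a complete answer in camera coordinates.
T_co = np.eye(4)
T_co[:3, 3] = [0.1, 0.0, 0.5]   # object 0.5 m in front of the camera

# T_wc: camera-to-world extrinsic. Only needed if you want world coords.
T_wc = np.eye(4)
T_wc[:3, 3] = [1.0, 2.0, 0.0]   # camera placed somewhere in the world

# Composing the two gives the object pose in the world frame.
T_wo = T_wc @ T_co
print(T_wo[:3, 3])  # object position in world coordinates
```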
Hello, is there any way to generate the required mask from the obj file or other mesh models?
This is a separate topic; let's discuss it at https://github.com/NVlabs/FoundationPose/issues/88
self.run_name = "2024-01-11-20-02-45"
Traceback (most recent call last):
  File "run_demo_test.py", line 40, in <module>
    refiner = PoseRefinePredictor()
  File "/home/bowen/FoundationPose/learning/training/predict_pose_refine.py", line 143, in __init__
    self.model.load_state_dict(ckpt)
  File "/opt/conda/envs/my/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for RefineNet:
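This error means the checkpoint's parameter names do not match the network you are loading it into, which is exactly what happens if the refine and score checkpoints are swapped. A quick way to diagnose it is to compare the key sets before calling `load_state_dict` (the `model` and `ckpt` below are stand-ins, not the actual RefineNet or checkpoint):

```python
import torch

# Stand-ins for illustration: a tiny model and a checkpoint dict
# whose keys only partially match the model's parameters.
model = torch.nn.Linear(4, 2)
ckpt = {"weight": torch.zeros(2, 4)}   # "bias" is absent on purpose

# Keys the model expects but the checkpoint lacks, and vice versa.
missing = set(model.state_dict()) - set(ckpt)
unexpected = set(ckpt) - set(model.state_dict())
print("missing keys:", sorted(missing))
print("unexpected keys:", sorted(unexpected))
```

If the two key sets are largely disjoint, you are almost certainly loading the wrong checkpoint for that network rather than hitting a code bug.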