ravitejageeda opened this issue 4 years ago
You can check this part: https://github.com/facebookresearch/InterHand2.6M#running-internet
Why is the annotation subset .json file needed during testing? And to test the model on a single image, do we need to apply any transformation?
To know which image names to load and the hand bboxes. GT poses are also loaded for the evaluation.
Could you check the code? https://github.com/facebookresearch/InterHand2.6M/blob/master/data/InterHand2.6M/dataset.py
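For reference, a minimal sketch of how a COCO-style annotation file like this is typically read; the file name and field names here are assumptions based on the release layout, and dataset.py is the authoritative version:

```python
import os.path as osp
from pycocotools.coco import COCO

# Assumed paths/names; adjust to your local layout.
annot_path = '../data/InterHand2.6M/annotations/all'
db = COCO(osp.join(annot_path, 'InterHand2.6M_test_data.json'))

for aid in list(db.anns.keys())[:5]:
    ann = db.anns[aid]                       # one hand sample
    img = db.loadImgs(ann['image_id'])[0]    # which image file to load
    print(img['file_name'], ann['bbox'])     # hand bbox: (xmin, ymin, w, h)
```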
I am using Google Colab to test this, and Colab has limited drive space.
I downloaded only InterHand2.6M.images.5.fps.v0.0.tar.partaa to test the model (https://github.com/facebookresearch/InterHand2.6M/releases/download/v0.0/InterHand2.6M.images.5.fps.v0.0.tar.partaa). From it I could extract only images/test/Capture1, which I placed at the proper images path.
I downloaded the annotations (https://github.com/facebookresearch/InterHand2.6M/releases/download/v0.0/InterHand2.6M.annotations.5.fps.zip) and placed them at the proper annotations path.
I downloaded the pretrained model InterNet.trained.on.InterHand2.6M.v0.0.zip (https://github.com/facebookresearch/InterHand2.6M/releases/download/v0.0/InterNet.trained.on.InterHand2.6M.v0.0.zip) and put snapshot_19.pth.tar (from one of the annotation subsets: H, M, H+M) in output/model_dump.
While running `python test.py --gpu 0 --test_epoch 19 --test_set test --annot_subset all`, I could see that the annotations cover the full dataset. I could not open the large JSON file because of resource limitations.
I have set vis=True in dataset.py in the two places.
Also, can I use individual dataset splits for training instead of the whole dataset?
If I want to test on some random image, will it be sufficient to run the model on the image while passing the bounding box of the hand in COCO format?
```
Using GPU: 0
09-27 17:15:42 Creating test dataset...
Load annotation from ../data/InterHand2.6M/annotations/all
loading annotations into memory...
Done (t=14.02s)
creating index...
index created!
Get bbox and root depth from ../data/InterHand2.6M/rootnet_output/rootnet_interhand2.6m_output_all_test.json
Number of annotations in single hand sequences: 197992
Number of annotations in interacting hand sequences: 154905
09-27 17:16:30 Load checkpoint from /content/InterHand2.6M/main/../output/model_dump/snapshot_19.pth.tar
09-27 17:16:30 Creating graph...
  0% 0/11029 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "test.py", line 78, in <module>
```
The images are distributed uniformly and randomly across the zipped chunks. You should download all zipped chunks and decompress them together. I don't know how you came to assume that InterHand2.6M.images.5.fps.v0.0.tar.partaa means Capture 0, but that is not true.
I'm not sure what you mean by split, but the annotation files are already split into train/test/val and all/human_annot/machine_annot.
Hi, I used this command to get only one part:
`!wget https://github.com/facebookresearch/InterHand2.6M/releases/download/v0.0/InterHand2.6M.images.5.fps.v0.0.tar.partaa`
and then this command to unpack it:
`!cat InterHand2.6M.images.5.fps.v0.0.tar.partaa | tar -xvf - -i`
The result is something like this:
```
InterHand2.6M_5fps_batch0/images/test/Capture1/ROM04_LT_Occlusion/cam400282/image18829.jpg
InterHand2.6M_5fps_batch0/images/test/Capture1/ROM04_LT_Occlusion/cam400431/
InterHand2.6M_5fps_batch0/images/test/Capture1/ROM04_LT_Occlusion/cam400431/image18469.jpg
InterHand2.6M_5fps_batch0/images/test/Capture1/ROM04_LT_Occlusion/cam400431/image19171.jpg
...
```
You should download all zipped chunks and unzip them together. Otherwise, you will run into the image-does-not-exist error that you reported above.
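A minimal Python sketch of reassembling the chunks once all of them are downloaded (the shell equivalent is `cat InterHand2.6M.images.5.fps.v0.0.tar.part* | tar -x -i`); file names assume the release naming above:

```python
import glob
import shutil
import tarfile

# Concatenate all downloaded chunks, in order, into a single tar archive.
parts = sorted(glob.glob('InterHand2.6M.images.5.fps.v0.0.tar.part*'))
with open('InterHand2.6M.images.5.fps.v0.0.tar', 'wb') as out:
    for part in parts:
        with open(part, 'rb') as f:
            shutil.copyfileobj(f, out)

# Extract the reassembled archive; yields InterHand2.6M_5fps_batch0/images/...
with tarfile.open('InterHand2.6M.images.5.fps.v0.0.tar') as tar:
    tar.extractall()
```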
So I understand now that there is no individual JSON file for each zipped chunk; the annotations cover everything together. Thanks for the clarification. I will try running with the full set of files.
Yes. human_annot is small, so you can try that one (i.e., train with the --annot_subset human_annot option).
Hi, I am trying to test your model on the RHD dataset and I could only get results for the left hand. Is there any way to get right-hand results without the left hand? Is it possible to get results for both hands at once?
I would also like to ask the same about the InterHand dataset.
I tried selecting only the right hand, but it did not work.
That is because only the left hand is a test hand in that image. The joint_valid entries of the right hand are zero. The input image only contains the left hand in this case.
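An illustrative sketch of how joint_valid gates the evaluation: errors are averaged over valid joints only, so an all-zero mask for one hand simply excludes it. The arrays here are dummy values, not the repo's evaluation code:

```python
import numpy as np

num_joints = 42                       # 21 joints per hand in InterHand2.6M
pred = np.random.rand(num_joints, 3)  # predicted 3D joints (dummy)
gt = np.random.rand(num_joints, 3)    # ground-truth 3D joints (dummy)
joint_valid = np.ones(num_joints)
joint_valid[21:] = 0                  # one hand marked invalid

# Per-joint error, averaged only over valid joints.
err = np.linalg.norm(pred - gt, axis=1)
mpjpe = (err * joint_valid).sum() / joint_valid.sum()
print(mpjpe)
```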
> To know which image names to load and the hand bboxes. GT poses are also loaded for the evaluation.
I see that the bbox for one image is a 4-element array; what does it stand for? If we want to use a webcam to pass real images to the model, how can I generate this bbox?
If you read here, you can find that it stands for (xmin, ymin, width, height). Unfortunately, I did not implement a hand bbox detector. I'm developing a new algorithm that contains a hand bbox detector. For now, you may want to use an object detection algorithm, such as detectron2.
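A hedged sketch of turning a detector's (xmin, ymin, width, height) box into a fixed-aspect-ratio crop box, in the spirit of the repo's preprocessing; the function name and the margin factor here are illustrative, not the repo's exact API:

```python
import numpy as np

def process_bbox(bbox, aspect_ratio=1.0, scale=1.25):
    x, y, w, h = bbox
    cx, cy = x + w / 2., y + h / 2.   # box center
    if w > aspect_ratio * h:          # pad the short side so the box
        h = w / aspect_ratio          # matches the network input aspect ratio
    else:
        w = h * aspect_ratio
    w, h = w * scale, h * scale       # add some margin around the hand
    return np.array([cx - w / 2., cy - h / 2., w, h], dtype=np.float32)

print(process_bbox((100, 120, 80, 60)))
```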
Got you. Thanks.
Hi, when I try to run test.py I see the following error: "module 'torch.cuda' has no attribute 'comm'". I am running on Google Colab. Can you help?
```
Using GPU: 0
11-10 13:31:41 Creating test dataset...
loading annotations into memory...
Done (t=0.23s)
creating index...
index created!
Get bbox and root depth from ../data/RHD/rootnet_output/rootnet_rhd_output.json
11-10 13:31:42 Load checkpoint from /content/InterHand2.6M/main/../output/model_dump/snapshot_49.pth.tar
11-10 13:31:42 Creating graph...
  0% 0/2 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "test.py", line 78, in <module>
    main()
  File "test.py", line 61, in main
    out = tester.model(inputs, targets, meta_info, 'test')
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 159, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/InterHand2.6M/main/model.py", line 48, in forward
    joint_heatmap_out, rel_root_depth_out, hand_type = self.pose_net(img_feat)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/InterHand2.6M/main/../common/nets/module.py", line 56, in forward
    root_depth = self.soft_argmax_1d(root_heatmap1d).view(-1,1)
  File "/content/InterHand2.6M/main/../common/nets/module.py", line 43, in soft_argmax_1d
    accu = heatmap1d * torch.cuda.comm.broadcast(torch.arange(cfg.output_root_hm_shape).type(torch.cuda.FloatTensor), devices=[heatmap1d.device.index])[0]
AttributeError: module 'torch.cuda' has no attribute 'comm'
```
I haven't used Colab. On local machines it works fine. You'd better google it.
I get the same error: `AttributeError: module 'torch.cuda' has no attribute 'comm'`
I'm using torch 1.7.0
I changed the code. It will work fine now. Thanks!
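For anyone hitting this on newer PyTorch: a sketch of the kind of change that removes the dependency, building the per-bin index tensor directly on the heatmap's device instead of going through torch.cuda.comm.broadcast. This mirrors the idea of the fix, not necessarily the exact commit:

```python
import torch
import torch.nn.functional as F

def soft_argmax_1d(heatmap1d):
    # heatmap1d: (N, num_bins) tensor of per-bin scores.
    heatmap1d = F.softmax(heatmap1d, dim=1)
    # Index weights created on the same device and dtype as the input,
    # so no cross-device broadcast is needed.
    coord = torch.arange(heatmap1d.shape[1],
                         dtype=heatmap1d.dtype, device=heatmap1d.device)
    # Expectation over bin indices = soft argmax.
    return (heatmap1d * coord).sum(dim=1)
```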
@mks0601 Thank you, I had figured that out earlier. When I try to pass a random image, I change trans_test to gt in the config file and load my own JSON in dataset.py through self.annot_path, with self.rootnet_output_path commented out. I can see that bbox information alone is not enough in the JSON file: we also need to pass joint_img, joint_cam, joint_valid, cam_param, princpt, etc. to get the output. When testing a random image we will not have all that info. Is there any option to pass only a bbox as input and get the prediction and the 3D output?
Hi guys. I added demo code for a random image.
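For later readers, a rough sketch of what running on a random image looks like: crop with a hand bbox, resize to the network input, and run a forward pass. The `model` variable, the 256x256 input size, and the forward signature follow test.py as quoted in the traceback above, but treat this as an illustration, not the repo's exact demo:

```python
import cv2
import torch

# Assumes `model` is the loaded InterNet in eval mode, as in test.py.
img = cv2.imread('input.jpg')             # any image containing a hand
xmin, ymin, w, h = 100, 100, 200, 200     # hand bbox, e.g. from detectron2
crop = cv2.resize(img[ymin:ymin + h, xmin:xmin + w], (256, 256))
inp = torch.from_numpy(crop.transpose(2, 0, 1)[None].copy()).float() / 255.

with torch.no_grad():
    out = model({'img': inp.cuda()}, {}, {}, 'test')
# out holds the joint coordinates, relative root depth, and the
# estimated hand type (right / left / interacting).
```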
Thank you very much
How do I run this on a test image? And how can I get the two hands separately?