liwenssss opened 2 years ago
Hi
Sorry for the confusion. I updated the instructions for obtaining the AGORA annotation files here. Please let me know if you run into any problems.
Hi, another question. I downloaded the AGORA images in 1280x720 and directly ran the AGORA dataset ('train' split) to inspect the cropped images and the hand bounding boxes. But something seems wrong. Do I need extra preprocessing?
Could you explain more? What do you mean by running the AGORA dataset ('train' split)? How did you visualize the hand boxes?
Yes, I just used your debug code:

```python
if lhand_bbox_valid:
    _tmp = lhand_bbox.copy().reshape(2, 2)
    _tmp[:, 0] = _tmp[:, 0] / cfg.output_hm_shape[2] * cfg.input_img_shape[1]
    _tmp[:, 1] = _tmp[:, 1] / cfg.output_hm_shape[1] * cfg.input_img_shape[0]
    cv2.rectangle(_img, (int(_tmp[0, 0]), int(_tmp[0, 1])),
                  (int(_tmp[1, 0]), int(_tmp[1, 1])), (255, 0, 0), 3)
    cv2.imshow('img', _img)
    cv2.waitKey(0)
```
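For reference, the scaling in that debug code maps box corners from output-heatmap coordinates back to input-image coordinates. A standalone sketch, with hypothetical shapes standing in for `cfg.output_hm_shape` and `cfg.input_img_shape` (the real values come from the repo's config):

```python
import numpy as np

# Hypothetical stand-ins: output_hm_shape is (depth, height, width),
# input_img_shape is (height, width), as in the repo's conventions.
output_hm_shape = (8, 64, 48)
input_img_shape = (256, 192)

def hm_to_img(bbox):
    """Map a (2, 2) array of [x, y] corners from heatmap to image space."""
    bbox = bbox.astype(np.float64).copy()
    bbox[:, 0] = bbox[:, 0] / output_hm_shape[2] * input_img_shape[1]  # x axis
    bbox[:, 1] = bbox[:, 1] / output_hm_shape[1] * input_img_shape[0]  # y axis
    return bbox

corners = np.array([[0, 0], [48, 64]])  # full heatmap extent
print(hm_to_img(corners))  # full heatmap maps to the full 192x256 input
```

If the boxes drawn this way are misplaced, the box values themselves were probably not in heatmap space to begin with.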
Other settings:

```python
class AGORA(torch.utils.data.Dataset):
    def __init__(self, transform, data_split):
        self.transform = transform
        self.data_split = data_split
        self.data_path = '/home/lws/dataset/AGORA/'
        self.resolution = (720, 1280)  # height, width. one of (720, 1280) and (2160, 3840)
        self.test_set = 'val'  # val, test
```

and the loading loop:

```python
from torch.utils.data.dataloader import DataLoader

dataset = AGORA(torchvision.transforms.ToTensor(), 'train')
data_loader = DataLoader(dataset, batch_size=1, shuffle=False)
for _ in data_loader:
    pass
```
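(For anyone hitting a similar issue: since the dataset supports both (720, 1280) and (2160, 3840), one thing worth checking is that boxes annotated at one resolution are rescaled before being drawn on images at the other. A minimal sketch; `scale_bbox` is an illustrative helper, not the repo's actual code:)

```python
def scale_bbox(bbox, src_res, dst_res):
    """Scale an [x_min, y_min, x_max, y_max] box between image resolutions.

    src_res and dst_res are (height, width) tuples, matching the
    dataset's self.resolution convention.
    """
    sy = dst_res[0] / src_res[0]  # vertical scale factor
    sx = dst_res[1] / src_res[1]  # horizontal scale factor
    x0, y0, x1, y1 = bbox
    return [x0 * sx, y0 * sy, x1 * sx, y1 * sy]

# e.g. a box annotated on 3840x2160 frames, used with 1280x720 frames:
# every coordinate shrinks by a factor of 3.
print(scale_bbox([300, 200, 900, 800], (2160, 3840), (720, 1280)))
```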
Sorry, give me a second
I fixed all the bugs. Thanks!
Hi, thanks for your reply again! Maybe there's a small error; I'm not sure:
Hi, thanks for your nice work. I noticed that there are some keys such as 'smplx_joints_2d_path' and 'smplx_joints_3d_path' in your annotation files. How did you generate the corresponding files?
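For context, 2D joint files like these are typically produced by perspective-projecting the 3D SMPL-X joints (in camera coordinates) with the camera intrinsics. This is the standard pinhole recipe, not necessarily the exact script the authors used; the focal length and principal point below are illustrative, while the real values come from AGORA's camera parameters:

```python
import numpy as np

def project_joints(joints_3d, focal, princpt):
    """Perspective-project (N, 3) camera-space joints to (N, 2) pixels.

    focal = (fx, fy), princpt = (cx, cy): pinhole camera intrinsics.
    """
    x = joints_3d[:, 0] / joints_3d[:, 2] * focal[0] + princpt[0]
    y = joints_3d[:, 1] / joints_3d[:, 2] * focal[1] + princpt[1]
    return np.stack([x, y], axis=1)

# Two joints 2 m in front of an illustrative 1280x720 camera.
joints = np.array([[0.0, 0.0, 2.0], [0.5, -0.25, 2.0]])
pts_2d = project_joints(joints, focal=(1000.0, 1000.0), princpt=(640.0, 360.0))
print(pts_2d)  # the on-axis joint lands at the principal point (640, 360)
```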