Sure, the pretrained model for the egocentric view will be provided in the next few days.
The pretrained model has been added to the README instructions.
OK, that's very nice of you. Is it possible to test on my own image, by the way? Thank you.
Yes, make the .npy file for your dataset and test it on that particular dataset.
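If it helps, here is a minimal sketch of what making those .npy files for a custom test set could look like. The file names (`images-test.npy`, `points2d-test.npy`, `points3d-test.npy`) and the `my_annotations` variable are only my assumptions for illustration; please check the dataset class in this repo for the exact names and shapes it expects.

```python
import numpy as np

# my_annotations is a hypothetical list of (image_path, 2D keypoints, 3D keypoints)
# tuples for your own data; replace it with however you store your annotations.
image_paths, points2d, points3d = [], [], []

for path, kp2d, kp3d in my_annotations:
    image_paths.append(path)
    points2d.append(np.asarray(kp2d, dtype=np.float32))   # (num_keypoints, 2)
    points3d.append(np.asarray(kp3d, dtype=np.float32))   # (num_keypoints, 3)

# Assumed file-name pattern; verify against the repo's dataset loader.
np.save('images-test.npy', np.asarray(image_paths))
np.save('points2d-test.npy', np.asarray(points2d, dtype=np.float32))
np.save('points3d-test.npy', np.asarray(points3d, dtype=np.float32))
```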
To test on my own dataset, do I need to provide the camera extrinsic/intrinsic parameters? Like here: https://github.com/bardiadoosti/HOPE/blob/8ea544d36847dfd660e5c409695376595c89920f/datasets/fhad/make_data.py#L51
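For reference, my understanding of what that line does with those matrices is roughly the following (a simplified sketch, variable names are mine and may not match the script exactly):

```python
import numpy as np

def project_points(points3d, cam_extr, cam_intr):
    """Project (N, 3) world-space keypoints to (N, 2) pixel coordinates,
    roughly as done in datasets/fhad/make_data.py."""
    # Homogeneous world coordinates, shape (N, 4)
    points_hom = np.concatenate([points3d, np.ones((points3d.shape[0], 1))], axis=1)
    # Extrinsic matrix: world coordinates -> camera coordinates
    cam_coords = points_hom.dot(cam_extr.T)[:, :3]
    # Intrinsic matrix: camera coordinates -> homogeneous pixel coordinates
    pix_hom = cam_coords.dot(np.asarray(cam_intr).T)
    # Perspective divide by depth
    return pix_hom[:, :2] / pix_hom[:, 2:]
```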
For 3D annotation you need these matrices. But if you just want to test the model on a 2D-annotated dataset, put a placeholder for the 3D labels (maybe a zero vector) and manually remove the 3D term from the loss. That will just calculate the 2D error for you.
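Concretely, the evaluation loop would then look something like this sketch (variable names are illustrative and may not match the repo's scripts exactly; `model`, `test_loader` and the zero-placeholder 3D labels are assumed to be set up the same way as in the repo's testing code):

```python
import torch

# Sketch: compute the 2D error only, with the 3D loss term removed.
criterion = torch.nn.MSELoss()
model.eval()

total_loss2d = 0.0
with torch.no_grad():
    for inputs, labels2d, labels3d in test_loader:    # labels3d is just a dummy
        outputs2d_init, outputs2d, outputs3d = model(inputs)
        loss2d = criterion(outputs2d, labels2d)
        # loss3d = criterion(outputs3d, labels3d)     # removed: 3D labels are placeholders
        total_loss2d += loss2d.item()

print('mean 2D loss:', total_loss2d / len(test_loader))
```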
Hi @bardiadoosti. Thank you for sharing your code. Here is my inference code:
```python
# model, transform, args, cwd and use_cuda are defined earlier in my script.
import os
import torch
from torch.autograd import Variable
from PIL import Image
import matplotlib.pyplot as plt

model.load_state_dict(torch.load(args.pretrained_model))

# Load and preprocess a single image
image_path = os.path.join(cwd, 'data/images')
image_name = 'frame_029.jpg'
image_file = os.path.join(image_path, image_name)
image = Image.open(image_file)
img = transform(image)
img = img.unsqueeze(0)

inputs = Variable(img)
if use_cuda and torch.cuda.is_available():
    inputs = inputs.float().cuda(device=args.gpu_number[0])

# Forward pass
with torch.no_grad():
    outputs2d_init, outputs2d, outputs3d = model(inputs)

outputs2d = outputs2d.cpu().numpy()
outputs2d_init = outputs2d_init.cpu().numpy()
print(outputs2d.shape)

# Plot the predicted 2D keypoints on top of the input image
f, ax = plt.subplots(1, 1, figsize=(10, 10))
ax.imshow(image)
ax.scatter(outputs2d[:, :, 0], outputs2d[:, :, 1])
save_image_file = os.path.join(image_path, '{}_out.jpg'.format(image_name.split('.')[0]))
plt.savefig(save_image_file)
```
And the result is shown in the image below. Do you have any suggestions?
Could you please provide the pretrained model? Thanks!