Kaszanas closed this issue 3 years ago
Hi, have a look at these instructions. Follow them carefully and you will get that file.
Thank you very much for the response, @shrubb. I am closing this issue; if I fail to follow the instructions, I will re-open it or leave a message here despite it being closed.
@shrubb I have realized that, at this point, I am only after testing inference with the pretrained model that was uploaded to Google Drive. But your training system is complicated, and there is no documentation on how to perform inference on media input (photos or videos).
Could you provide instructions on how to use the model?
My attempts at loading the pretrained weights with the following code:
import torch
from mvn.models.triangulation import VolumetricTriangulationNet
from mvn.utils import cfg
config = cfg.load_config("./experiments/human36m/eval/human36m_vol_softmax.yaml")
model = VolumetricTriangulationNet(config)
model.load_state_dict(torch.load("./data/pretrained/human36m/human36m_vol_softmax_10-08-2019/checkpoints/0040/weights.pth"))
model.eval()
result in the following error being thrown:
python .\inference.py
Loading pretrained weights from: ./data/pretrained/human36m/pose_resnet_4.5_pixels_human36m.pth
Reiniting final layer filters: module.final_layer.weight
Reiniting final layer biases: module.final_layer.bias
Successfully loaded pretrained weights for backbone
Traceback (most recent call last):
File ".\inference.py", line 9, in <module>
model.load_state_dict(torch.load("D:/Projects/SportAnalytics/src/learnable-triangulation-pytorch/data/pretrained/human36m/human36m_vol_softmax_10-08-2019/checkpoints/0040/weights.pth"))
File "D:\Envs\SportAnalytics\lib\site-packages\torch\nn\modules\module.py", line 1045, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for VolumetricTriangulationNet:
Missing key(s) in state_dict: "backbone.conv1.weight", [...]
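One common cause of missing-key errors like this is a checkpoint that was saved from a model wrapped in torch.nn.DataParallel, whose parameter names carry a "module." prefix that a plain (unwrapped) model does not expect. I have not verified that this is what happens with this particular checkpoint, but a small helper like the following sketch (the helper name is hypothetical) strips the prefix before loading:

```python
from collections import OrderedDict

def strip_module_prefix(state_dict):
    # DataParallel saves parameters as "module.<name>"; strip that prefix
    # so the keys match an unwrapped model's state_dict.
    return OrderedDict(
        (key[len("module."):] if key.startswith("module.") else key, value)
        for key, value in state_dict.items()
    )

# Dummy checkpoint keys mirroring the error message above:
ckpt = {"module.backbone.conv1.weight": 0, "module.final_layer.bias": 1}
fixed = strip_module_prefix(ckpt)
print(list(fixed.keys()))  # ['backbone.conv1.weight', 'final_layer.bias']
```

If the prefix is indeed the problem, loading would then become model.load_state_dict(strip_module_prefix(torch.load(path))); otherwise passing strict=False to load_state_dict shows exactly which keys are missing or unexpected.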
I have spent a good couple of days trying to figure out how your code works and whether there is a built-in way of performing inference on arbitrary media files, but I can't seem to wrap my head around it. @shrubb, I would really appreciate your support.
Hi, have you managed to run validation on Human3.6M? I mean the command from the original post, python train.py --eval --eval_dataset val ...
If yes, then check how train.py loads the model (I personally don't know this; I never worked with that piece of code) and then do the same in your script. You can inspect it by, for example, running with python -m pdb instead of python.
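As a complement to stepping through train.py under pdb, a quick way to locate a loading mismatch is to diff the key names the model expects against those the checkpoint provides. A minimal pure-Python sketch (the helper name and the example keys are hypothetical, chosen to mirror the error above):

```python
def diff_keys(model_keys, ckpt_keys):
    # Return (missing, unexpected): keys the model wants but the checkpoint
    # lacks, and keys the checkpoint has but the model does not.
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    return sorted(model_keys - ckpt_keys), sorted(ckpt_keys - model_keys)

missing, unexpected = diff_keys(
    ["backbone.conv1.weight"],         # what the model's state_dict expects
    ["module.backbone.conv1.weight"],  # what the checkpoint contains
)
print(missing, unexpected)
```

In practice one would pass model.state_dict().keys() and torch.load(path).keys(); a systematic prefix difference in the output points directly at the fix.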
@shrubb
I made an account to access the dataset some time ago but have not received any response, so my attempts at validation are currently blocked.
In the meantime, I would like to try running inference on images or video, both monocular and with multiple camera inputs, to see the pretrained model in action and to export the 3D information from it.
Hello,
I am attempting to recreate the documented steps in order to test this model. Running the following command, which is specified in the documentation:
returns the following error:
This seems odd, as I am unable to find the specified file among the files shared on Google Drive, so I cannot run the model evaluation.
If you have any solution to this problem, please let me know.