Closed LjIA26 closed 1 year ago
The outputs of the pose estimator can be accessed here if you use the demo script demo/topdown_demo_with_mmdet.py:
https://github.com/open-mmlab/mmpose/blob/2c4a60e357c7f68f3388f0fc4ddcb404674b81ae/demo/topdown_demo_with_mmdet.py#L45
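For reference, here is a minimal sketch (assuming the MMPose 1.x API; pose_estimator, img and bboxes are the variables already available inside the function that processes a single image in the demo, and names may differ slightly across versions) of how the predicted coordinates can be read out:

from mmpose.apis import inference_topdown
from mmpose.structures import merge_data_samples

# Run top-down pose estimation on the detected boxes and merge the
# per-instance results into a single data sample.
pose_results = inference_topdown(pose_estimator, img, bboxes)
data_samples = merge_data_samples(pose_results)

pred_instances = data_samples.get('pred_instances', None)
if pred_instances is not None:
    keypoints = pred_instances.keypoints        # shape (num_instances, num_keypoints, 2): x, y
    scores = pred_instances.keypoint_scores     # shape (num_instances, num_keypoints)

The keypoints array holds the x, y coordinates for each detected instance, which is what you would save or post-process.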
Thank you for your answer. Is there a way to edit the test file so I can get the keypoint outputs for many images?
When you mention a 'test file', are you referring to tools/test.py? If your objective is to obtain poses from a collection of images, our demo script can indeed help you accomplish this task.
The demo script seems to parse only one image at a time. How can I make it work for a collection of images?
The inferencer demo supports using a folder path as input and will generate pose outputs for all images in the folder. Example: https://mmpose.readthedocs.io/en/latest/demos.html#d-human-whole-body-pose-estimation-with-inferencer
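As a rough illustration (the model alias and paths are placeholders), the same folder-based inference can also be driven from Python via MMPoseInferencer:

from mmpose.apis import MMPoseInferencer

# A folder path is accepted directly as input; every image inside is processed.
inferencer = MMPoseInferencer(pose2d='human')

# The call returns a generator; each yielded item corresponds to one image and
# exposes the predicted keypoints under the 'predictions' key.
for result in inferencer('path/to/image_folder', vis_out_dir='vis_results/'):
    predictions = result['predictions']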
Thank you for your answer. How can I apply this to a custom model? My skeleton is not human; it's for plants.
Hello,
It doesn't work for custom models. Is there a way to make it work?
Could you clarify what you mean by "doesn't work for custom models"? Are you encountering errors, detection issues, or poor pose estimation results?
When I run the following command on the command line
python demo/inferencer_demo.py data/building/images-test --pose2d configs/plants8.py --pose2d-weights D:/Train-J12/best_coco_AP_epoch_50.pth --vis-out-dir vis_results/
I get:
Traceback (most recent call last):
File "D:\mtndew\mmpose\demo\inferencer_demo.py", line 143, in <module>
main()
File "D:\mtndew\mmpose\demo\inferencer_demo.py", line 137, in main
inferencer = MMPoseInferencer(**init_args)
File "d:\mtndew\mmpose\mmpose\apis\inferencers\mmpose_inferencer.py", line 88, in __init__
self.inferencers['pose2d'] = Pose2DInferencer(
File "d:\mtndew\mmpose\mmpose\apis\inferencers\pose2d_inferencer.py", line 94, in __init__
super().__init__(
File "C:\Users\LJ\anaconda3\envs\myclone\lib\site-packages\mmengine\infer\infer.py", line 160, in __init__
cfg, _weights = self._load_model_from_metafile(model)
File "C:\Users\LJ\anaconda3\envs\myclone\lib\site-packages\mmengine\infer\infer.py", line 393, in _load_model_from_metafile
raise ValueError(f'Cannot find model: {model} in {self.scope}')
ValueError: Cannot find model: configs/plants8.py in mmpose
I haven't been able to make it work, any thoughts?
Does the config file configs/plants8.py exist? The code snippet at
https://github.com/open-mmlab/mmengine/blob/main/mmengine/infer/infer.py#L154-L162
if isinstance(model, str):
    if osp.isfile(model):
        cfg = Config.fromfile(model)
    else:
        # Load config and weights from metafile. If `weights` is
        # assigned, the weights defined in metafile will be ignored.
        cfg, _weights = self._load_model_from_metafile(model)
        if weights is None:
            weights = _weights
seems to suggest otherwise.
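A quick way to verify which branch is taken is to run the same check locally (using the path from your command):

import os.path as osp

# If this prints False, the inferencer falls back to _load_model_from_metafile,
# which produces exactly the "Cannot find model" error shown above.
print(osp.isfile('configs/plants8.py'))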
Yes! It's the same one used during training, and in the same location.
I suspect the bug might be due to path representation on Windows.
I also used the full path, and it didn't work.
Also, if I use the config file generated by the training, I get a different error:
Traceback (most recent call last):
File "D:\mtndew\mmpose\demo\inferencer_demo.py", line 143, in <module>
main()
File "D:\mtndew\mmpose\demo\inferencer_demo.py", line 138, in main
for _ in inferencer(**call_args):
File "d:\mtndew\mmpose\mmpose\apis\inferencers\mmpose_inferencer.py", line 178, in __call__
inputs = self._inputs_to_list(inputs)
File "d:\mtndew\mmpose\mmpose\apis\inferencers\base_mmpose_inferencer.py", line 125, in _inputs_to_list
input_type = mimetypes.guess_type(inputs)[0].split('/')[0]
AttributeError: 'NoneType' object has no attribute 'split'
The isdir and isfile functions seem to behave unexpectedly on Windows. This scenario should not occur if you're inputting a folder path. I will check this problem tomorrow.
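For what it's worth, the AttributeError can be reproduced in isolation: mimetypes.guess_type returns (None, None) for a path without a recognizable extension, which is what the traceback shows once the folder check fails (the path below is only an example):

import mimetypes
import os.path as osp

path = 'data/building/images-test'
print(osp.isdir(path))             # expected to be True for a folder input
print(mimetypes.guess_type(path))  # (None, None) -> guess_type(...)[0].split('/') raises AttributeError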
Thank you!
Sorry, but I cannot reproduce this bug on my Windows computer. The inferencer works fine with a folder path as input. Without more detailed information, my ability to assist further is limited.
I am using a custom dataset; it's not human. I wonder if that's the problem.
During training I used the COCO training pipeline and did not register my custom dataset, as it wasn't necessary.
Do you think this is the problem?
Indeed, I had entered the test dataloader information incorrectly, and that's why it did not work.
Thank you!
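For anyone hitting the same issue, a rough sketch of the part of the config that had to be fixed (the dataset type, paths, and pipeline below are placeholders and must mirror the training setup):

# Illustrative test dataloader for a COCO-style custom dataset.
test_dataloader = dict(
    batch_size=1,
    dataset=dict(
        type='CocoDataset',                 # or the registered custom dataset class
        data_root='data/plants/',
        data_mode='topdown',
        ann_file='annotations/test.json',
        data_prefix=dict(img='images/'),
        test_mode=True,
        pipeline=val_pipeline,              # reuse the val/test pipeline defined in the config
    ),
)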
📚 The doc issue
Hello, I would like to do some post-processing with the predictions, but I don't seem to be able to get the keypoint predictions from the demo, inference, or testing scripts. Is there a way to do this? I remember this was a feature in an earlier version of MMPose.
I would like to get the x, y outputs.
Thank you for your help.