fengyuentau opened this issue 1 year ago
Hey @fengyuentau , I'd love to contribute to this and need some clarification.
Say if I were to add an eval script for the PP-HumanSeg model, I should add a new dataset class under tools/eval/datasets with the following methods:
- load_label (loads input and output for evaluation)
- eval (runs the evaluation process)
- get_result
- print_result

and then register the new dataset in eval.py.
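Something like this rough sketch (class name, constructor argument and metric are just my guess at the interface, not actual code from the repo):

```python
# Rough sketch of a dataset class for tools/eval/datasets.
# Class name, constructor argument and the mIoU metric are assumptions.
class PPHumanSeg:
    def __init__(self, root):
        self.root = root      # path to the downloaded dataset
        self.samples = []     # (image_path, label_path) pairs
        self.miou = None      # accumulated metric

    @property
    def name(self):
        return self.__class__.__name__

    def load_label(self):
        # load inputs and ground-truth masks for evaluation into self.samples
        ...

    def eval(self, model):
        # run the evaluation process: infer every image and accumulate mIoU
        ...

    def get_result(self):
        return self.miou

    def print_result(self):
        print("mIoU: {}".format(self.miou))
```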
Am I getting this right? thanks.
@labeeb-7z Basically yes. You can also take a look at https://github.com/opencv/opencv_zoo/pull/70 for reference.
Hey @fengyuentau , I started by looking for datasets for the models. Below is what I found; I would like your input before proceeding further.
I found this Readme which contains some information about the original model trained by PaddlePaddle. (Consider adding a direct link to this on the HumanSeg model page.)
They do provide a link to a dataset for inference, validation and training. However, I'm unsure whether this is the exact dataset used to train the model that is present in opencv_zoo.
I would like a confirmation to proceed with this dataset.
The estimation model is derived(?) from the Palm Detection model as per this blog from MediaPipe, so presumably the same dataset.
Information about the dataset is not mentioned in the blog; however, it links to a paper they published. The paper describes the dataset they used but gives no links to it. The described datasets (in-the-wild and in-house collected gesture) could not be found online. I could find some other datasets used for palm detection; should I proceed with them?
The provided references did not include any relevant information about the datasets; they are for a face detection model(?). I also could not find any relevant information at watrix.ai.
Again, the provided references do not contain any information related to the dataset used for training.
Maybe we can ask the original contributors of the respective models for the datasets used in training?
I think including more information about the models (datasets, model architecture, etc.) in the future would be helpful for users as well as developers.
Thank you for all the research! Please, see below.
In PaddleSeg, they provide everything including the validation script and data for testing, but no accuracy numbers. The model here in the zoo is converted from PaddlePaddle using this command, and you can see the keyword "hrnet18_small_v1" in the filename, which is basically the same as in PaddleSeg. If you have enough time, you can always get the model, test data and code from their repo, run val.py to get the accuracy, and then validate the one here in the zoo with OpenCV DNN as the inference framework and the same test data.
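For the OpenCV DNN side, a minimal sketch of loading the converted ONNX model and running inference on one test image could look like this (the model file name, 192x192 input size and preprocessing are assumptions; check the model's README for the actual values):

```python
# Sketch: run the zoo's converted PP-HumanSeg ONNX model with OpenCV DNN.
# File name, input size and preprocessing below are assumptions.
import cv2 as cv
import numpy as np

net = cv.dnn.readNet("human_segmentation_pphumanseg.onnx")

img = cv.imread("test.jpg")
blob = cv.dnn.blobFromImage(img, scalefactor=1.0 / 255.0,
                            size=(192, 192), swapRB=True)
net.setInput(blob)
out = net.forward()               # assumed shape: (1, num_classes, 192, 192)
mask = np.argmax(out[0], axis=0)  # per-pixel class id, used for mIoU
```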
Since they do not provide the dataset, it is worth a try with other datasets that are popular and widely used.
This one is adapted from a face detection model and trained with Chinese license plate datasets. I believe you can also find some datasets available online.
This model comes from the WeChat-CV team and I am not positive that they will provide the data. But again, it is worth a try.
> I think including more information about the models (datasets, model architecture, etc.) in the future would be helpful for users as well as developers.
It could be helpful if the information is accurate. Normally we put a link to the source of the model and let people look for the thing they are interested in. Not everyone is willing or able to share datasets due to some limitations and restrictions.
Hey @fengyuentau , apologies for the delayed response; I was caught up with college work.
I've submitted a PR for the PP-HumanSeg model (#130).
For palm detection and hand pose estimation, I came across the HandNet dataset and think it can be a good place to start for evaluation. Other potential datasets can be found at awesome-hand-pose-estimation. Let me know which one I should proceed with.
Hello @labeeb-7z , thank you for the update! I've reviewed the pull request and you could take a look at the comments.
As for the evaluation for palm detection and hand pose estimation, I suggest you take a look at palm detection first because we are planning to upgrade the hand pose estimation from 2D to 3D hand pose output.
Hello @fengyuentau @zihaomu , I was interested in #7 and #8 of OpenCV's GSoC ideas for this year. I went through the resources and have some points to discuss before writing the proposal.
Is there a forum/mailing list for such discussions? The contributor+mentor mailing list provided on the wiki page doesn't seem to work.
Also, I see that the ideas for '23 are the same as the ones for '22, so I just wanted to confirm whether they'll still be part of GSoC '23.
Hello @labeeb-7z , the forum/mailing list is https://groups.google.com/g/opencv-gsoc-202x. Some of the ideas are the same because of limited slots and lack of proposals.
Please have the discussion on that page; this issue is for discussion of the evaluation scripts.
Hey, @fengyuentau I would love to contribute to this issue.
I have some doubts regarding the dataset used for the Text Detection PP-OCRv3 model and would like some clarification. I looked through the datasets listed in this research paper and found quite a few. To test the accuracy of our model, can we use any of these datasets, or only the ones you mention in the project's README (IC15 and TD500)?
@kshitijdshah99 You are welcome to contribute. You can use any dataset as long as it is publicly accessible. We do favor datasets which are popular; for example, the COCO dataset is used for evaluation in basically every object detection paper.
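For reference, the standard COCO-style evaluation can be run with pycocotools; a sketch, assuming the detections have already been dumped in the COCO results JSON format (file names here are placeholders):

```python
# Sketch: standard COCO mAP evaluation with pycocotools.
# Assumes detections are already saved in the COCO results JSON format;
# annotation and result file names are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground truth
coco_dt = coco_gt.loadRes("detections.json")          # model predictions

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()   # prints AP, AP50, AP75, etc.
```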
Hello @fengyuentau, I'm also interested in tackling one of these evaluation scripts. Do you have a model that you'd prioritize over the others? 👍 I can take a look and come back with a brief proposal.
I'm interested in the Object Detection Evaluation. If no one else is working on it, can I give it a try? @fengyuentau
Yes, feel free to do so.
We now have over 15 models covering more than 10 tasks in the zoo. Although most of the models are converted to ONNX directly from their original formats, such conversion may potentially lead to a drop in accuracy, especially for FP16 and Int8-quantized models. To show the actual accuracy to our users, we already have some evaluation scripts in https://github.com/opencv/opencv_zoo/tree/master/tools/eval that meet the following conditions:
Take a look at the task list below for current status. Feel free to leave a comment for application or discussion before you start to contribute.