Open bernhardschaefer opened 2 years ago
For this version of the tool we used two models: one for recognising nodes and another for recognising connecting objects. The second one was trained on a reduced set of images due to labelling issues.
We labelled connecting objects with bounding boxes. However, some connecting objects have a particular layout: their bounding boxes may enclose several other elements (e.g. nodes). Including those images in our experiments reduced the model's performance. We are currently working on improving the detection of connecting objects to solve this issue.
If you want to combine the two datasets into a single one, you can merge the annotations that refer to images present in both datasets. By doing so you will exclude both the filtered images and the augmented ones (those that have no connecting objects).
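For anyone attempting that merge, a minimal sketch could look like the following. This is not code from the repository; it assumes the two datasets are standard COCO-format dicts, matches images by file_name, and remaps ids so the two annotation sets do not collide. All names are illustrative.

```python
def merge_coco(nodes, edges):
    """Merge two COCO-style dicts, keeping only images present in both
    (matched by file_name) and remapping ids to avoid collisions."""
    shared = {img["file_name"] for img in nodes["images"]} & \
             {img["file_name"] for img in edges["images"]}

    # Keep node categories as-is; shift edge category ids past them.
    cat_offset = max(c["id"] for c in nodes["categories"])
    merged = {
        "categories": nodes["categories"] + [
            {**c, "id": c["id"] + cat_offset} for c in edges["categories"]
        ],
    }

    # Assign fresh image ids, keyed by file_name.
    new_img_id = {name: i for i, name in enumerate(sorted(shared), start=1)}
    merged["images"] = [
        {**img, "id": new_img_id[img["file_name"]]}
        for img in nodes["images"] if img["file_name"] in shared
    ]

    # Remap annotations from both datasets onto the new image ids.
    merged["annotations"] = []
    ann_id = 1
    for ds, coff in ((nodes, 0), (edges, cat_offset)):
        id_to_name = {img["id"]: img["file_name"] for img in ds["images"]}
        for ann in ds["annotations"]:
            name = id_to_name[ann["image_id"]]
            if name not in shared:
                continue  # image was filtered or augmented away in the other set
            merged["annotations"].append({
                **ann,
                "id": ann_id,
                "image_id": new_img_id[name],
                "category_id": ann["category_id"] + coff,
            })
            ann_id += 1
    return merged
```

Images that exist in only one of the two datasets are dropped, which is exactly the exclusion described above.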
Thanks for clarifying this Fabrizio. Do you have the AP numbers when training the node model on the subset of the images? Or alternatively, do you have the coco files and results when training the connecting objects model on the full dataset? Otherwise I cannot do a fair comparison.
One more thing: In the coco files I see "Complex Gateway" as a category, but this category is missing in the results in the README. Did you maybe forget to report the results for this category?
To reduce complexity I also went for a separate approach where I train and evaluate two models, one for recognising nodes and another for recognising connecting objects.
There are two questions that remain from my side:
1) AP-Keypoints evaluation: how did you set the kpt_oks_sigmas when creating the COCOEvaluator? The kpt_oks_sigmas parameter is missing in the BPMN_Keypoints_Detection.ipynb notebook, and it is required by COCOEvaluator when evaluating keypoints.
2) Complex Gateway: which bounding box AP did you get for this class?
Thanks in advance!
We do not have the AP numbers for the node model on the subset of the images, and we discarded the coco files and results for the connecting objects model on the full dataset. However, we could regenerate them. Let's discuss it and see which solution is best.
Since Complex Gateways were not detected properly, we preferred to omit them in this first prototype. We have a solution for that, but it will require re-training the model.
Regarding the AP-Keypoints evaluation, we set the kpt_oks_sigmas to 1.0. We just noticed that the file on github is not the most up-to-date one; we are going to fix that soon. Thanks for noticing.
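For anyone wondering what that sigma actually controls: kpt_oks_sigmas are the per-keypoint standard deviations in the Object Keypoint Similarity (OKS) score that COCO keypoint AP is built on. Below is a small sketch of that formula in plain Python, not the detectron2/pycocotools implementation itself, just the same math (with kappa_i = 2 * sigma_i, as in pycocotools):

```python
import math

def oks(pred, gt, vis, area, sigmas):
    """Object Keypoint Similarity for one instance.
    pred, gt: lists of (x, y) keypoints; vis: visibility flags;
    area: ground-truth object area; sigmas: the per-keypoint kpt_oks_sigmas."""
    total, n = 0.0, 0
    for (px, py), (gx, gy), v, s in zip(pred, gt, vis, sigmas):
        if v <= 0:
            continue  # only visible keypoints contribute
        d2 = (px - gx) ** 2 + (py - gy) ** 2
        k2 = (2 * s) ** 2  # kappa_i = 2 * sigma_i
        total += math.exp(-d2 / (2 * area * k2))
        n += 1
    return total / n if n else 0.0
```

A larger sigma makes the score more forgiving of localisation error; with sigma = 1.0 a perfect prediction still scores OKS = 1.0, but nearby misses are penalised only mildly.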
Thank you for the clarification. Since I now also train and evaluate two models, I can work with the coco files that are currently available.
I would appreciate it if you could update the Jupyter notebook with the training configuration that you used to obtain the results in the README.
First of all: Great work, it's really nice to see more people working on BPMN recognition. :-)
I then realized that the images that are not part of RePROSitory do not contain connecting objects. However, there are many other images that do contain connecting objects, but these are not annotated:
Is there a reason for this? Or am I missing something?
Sketch2BPMN uses one model to recognize both nodes and connecting objects. Therefore, I wanted to combine the two separate coco datasets into one, but that's not possible if only some images have edge annotations.
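In case it helps others hitting the same issue: a quick way to check which images in a COCO file carry no annotations at all, before deciding how (or whether) to merge. This is a generic sketch over a loaded COCO dict, not tied to this repo's file names.

```python
def images_without_annotations(coco):
    """Return the file_names of images that have zero annotations
    in a COCO-format dict."""
    annotated = {a["image_id"] for a in coco["annotations"]}
    return sorted(
        img["file_name"]
        for img in coco["images"]
        if img["id"] not in annotated
    )
```

Running this on the connecting-objects file would list exactly the images that would have to be dropped (or re-annotated) before combining the two datasets into one.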