Hi Dani,
We do not have this released yet, but are working on releasing a clean modular version of the code. Stay tuned!
Best, Amlan
May I ask how many interactive annotations were performed in order to achieve the level of performance shown in this repository's examples? I am currently trying my own object crops and it is not performing so well :S
I think the examples are generated without any hand-annotation? When I run the inference code, the network produces nearly the same results on the examples.
Yes, that is right. I did not mean the provided examples, but my own instances. If you look at the paper, they include a human in the loop to improve the polygon estimates on different objects (a car, in their example). My concern is how many interactive annotations were needed for that object class.
We created two scenarios to evaluate the performance of PolygonRNN++ in interactive mode. The first one aims to simulate a human in the loop. As explained in the paper, in this regime we correct a prediction if it deviates from the corresponding GT vertex by a minimum distance T (a hyperparameter that governs the quality of the produced annotations). Figures 6, 7, and 8 show the details, including the average number of interactions (clicks) per object instance needed to achieve a desired quality. The second regime is a small-scale experiment with an earlier version of the tool. Since this was a real experiment (not simulated), we focused on measuring the time it takes annotators to fix a given instance (if needed) to achieve the desired quality.
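To make the simulated regime concrete, here is a minimal sketch of that correction rule (this is not the released code; `simulate_annotator` is a hypothetical helper, and it assumes a precomputed one-to-one matching between predicted and ground-truth vertices, whereas the actual model feeds each corrected vertex back into the recurrent decoder before predicting the following ones):

```python
import numpy as np

def simulate_annotator(pred_vertices, gt_vertices, T):
    """Simulated human-in-the-loop correction (sketch, not the paper's code).

    Replaces every predicted vertex that deviates from its matched
    ground-truth vertex by more than T pixels, counting each replacement
    as one annotator click.
    """
    corrected = np.array(pred_vertices, dtype=float)
    gt = np.array(gt_vertices, dtype=float)
    clicks = 0
    for i in range(len(corrected)):
        if np.linalg.norm(corrected[i] - gt[i]) > T:
            corrected[i] = gt[i]  # simulated "click": snap vertex to GT
            clicks += 1
    return corrected, clicks

# Example: a 3-vertex prediction where one vertex is off by ~7 px, with T = 5
pred = [(10, 10), (50, 12), (30, 40)]
gt   = [(10, 10), (55, 17), (30, 41)]
poly, n_clicks = simulate_annotator(pred, gt, T=5)
print(n_clicks)  # -> 1
```

Lowering T produces higher-quality annotations at the cost of more clicks per instance, which is the trade-off Figures 6-8 quantify.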
Excellent repo :+1: Could you please provide some guidance on how to fine-tune the network for new objects?
Thanks, Dani