In other words, how do you find the ground truth of the object and area potential maps during training among the many pre-computed ground truths for the partial maps? I think you must compare the partial map seen during training against the (randomly collected) partial maps in the dataset. Am I correct?
I'm not sure I follow. The area and object potential functions are computed using the partial and complete semantic maps. During dataset creation, we randomly generate a partial semantic map from a complete semantic map and then compute the corresponding potential functions. I would suggest tracing the code from this line and understanding how the above process is implemented. Happy to answer any further questions.
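To make the dataset-creation step above concrete, here is a minimal sketch (not the repo's actual code) of how a training pair could be generated: a complete top-down semantic map is masked down to a partial map, and a supervision target is derived from the complete map. The function names, map layout, and especially the local-window "area potential" stand-in are assumptions for illustration, not the paper's exact potential definition:

```python
import numpy as np

def make_training_pair(complete_map, rng):
    """complete_map: (C, H, W) binary semantic map of the full scene;
    channel 0 is assumed here to mark free/navigable space."""
    C, H, W = complete_map.shape
    # Pretend the agent has only observed a random rectangular region so far.
    h0, h1 = sorted(rng.integers(0, H, size=2))
    w0, w1 = sorted(rng.integers(0, W, size=2))
    visible = np.zeros((H, W), dtype=bool)
    visible[h0:h1, w0:w1] = True
    partial_map = complete_map * visible  # unobserved cells zeroed out

    # Crude stand-in for the area potential target: for every cell, count the
    # unobserved free-space cells in a local k x k window, normalized to [0, 1].
    free, k = complete_map[0], 7
    unseen_free = (free > 0) & ~visible
    pad = np.pad(unseen_free.astype(np.float32), k // 2)
    area_potential = np.zeros((H, W), dtype=np.float32)
    for i in range(H):
        for j in range(W):
            area_potential[i, j] = pad[i:i + k, j:j + k].sum()
    area_potential /= max(area_potential.max(), 1e-6)
    return partial_map, area_potential

rng = np.random.default_rng(0)
complete = (rng.random((4, 64, 64)) > 0.7).astype(np.float32)  # toy 4-channel map
partial, target = make_training_pair(complete, rng)
```

The key point is that both the input (partial map) and the target (potential map) come from the same complete map, so no matching or lookup against other maps in the dataset is needed.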
Thanks. Actually, I have read your excellent paper carefully one more time, and I think the training is an image-to-image model, which is your "interaction-free" concept, am I correct? So during training you do not need to linearly combine the two potential functions; you just calculate the loss between the predicted frontier potentials and the ground truth. I hope my new understanding is correct...
@rginjapan - yes, your understanding is correct
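A minimal PyTorch sketch of this "interaction-free", image-to-image training step, assuming the setup described above: the network maps a partial semantic map to potential maps, and the loss is computed directly against the precomputed ground truth. No linear combination of the two potentials happens here; that weighting is only needed at test time when scoring frontiers. The channel counts and the small convolutional network are illustrative, not the repo's exact architecture:

```python
import torch
import torch.nn as nn

class PotentialNet(nn.Module):
    def __init__(self, in_ch=17, out_ch=2):  # e.g. semantic channels in; area + object potentials out
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 1),
        )

    def forward(self, partial_map):
        return self.net(partial_map)

model = PotentialNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

partial_map = torch.rand(8, 17, 128, 128)   # batch of partial semantic maps
gt_potentials = torch.rand(8, 2, 128, 128)  # precomputed ground-truth targets

optimizer.zero_grad()
pred = model(partial_map)
loss = nn.functional.mse_loss(pred, gt_potentials)  # supervised, interaction-free
loss.backward()
optimizer.step()
```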
How do you compare the partial map obtained during training with the ones in your pre-prepared dataset? And how can you guarantee that the dataset samples all locations in the complete map for training?