kdongyi opened 10 months ago
The Attention-RPN paper explicitly states that fine-tuning is not required, so why does the README suggest a fine-tuning step?