Closed Eric-yuuki closed 8 months ago
Hi,
My question is whether you have tried other RAW data that did not appear in the CycleR2R training process, and whether the performance still surpasses the RGB baseline in that case.
If I've understood your question correctly, you're asking about training the CycleR2R model on RAW data from a different camera (RAW_{other cam}) together with RGB_b, then training a detector on the generated simRAW_{other cam}, and finally testing it on simRAW_i. The answer is no, we have not conducted such tests. Our primary objective is to use large-scale labeled RGB data and unlabeled RAW data from the target camera (RAW_{target cam}) to train a detector specifically for that camera's RAW images.
So there is a doubt that the reason it surpasses the RGB baseline is that the style of simRAW_b matches that of RAW_i, meaning the detector has already seen the style of the test data in advance.
To partially address your point: yes, one reason our approach surpasses the baselines is that simRAW_i resembles RAW_i more closely than the RGB baselines' data does. This supports the hypothesis that the stylistic similarity between simRAW_i and RAW_i contributes to our model's performance advantage. Additionally, as detailed in our supplementary materials, testing on the RGB outputs of different Image Signal Processors (ISPs) can yield significantly different accuracies, due to the stylistic disparities introduced by each ISP.
Hello,
In the paper, all the domain adaptation methods and CycleR2R are trained with RAW_i and RGB_b.
Then the detectors are trained on the simRAW_b generated by CycleR2R or the other DA methods.
And the test set is RAW_i again.
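The pipeline described above can be sketched as follows. This is a toy illustration only: none of these function names come from the CycleR2R codebase, and the "images" are placeholder strings used to make the data flow between the four stages explicit.

```python
# Hypothetical sketch of the experimental pipeline (all names are placeholders,
# not the actual CycleR2R API).

def train_cycler2r(raw_i, rgb_b):
    """Learn an RGB -> RAW translation from unlabeled RAW_i and labeled RGB_b.

    A real implementation trains an unpaired image-to-image translation model;
    here we just return a stub that tags each RGB image as translated.
    """
    return lambda rgb: [("sim_" + img, label) for img, label in rgb]

def train_detector(sim_raw_b):
    """Train a detector on the translated, still-labeled simRAW_b set.

    Stub: record the 'style' prefix of the training images, to mirror the
    concern that the detector absorbs the style of its training data.
    """
    return {img.split("_")[0] for img, _ in sim_raw_b}

# Toy data: (image_id, label) pairs stand in for real images.
rgb_b = [("rgbA", 0), ("rgbB", 1)]   # large labeled RGB dataset
raw_i = ["rawX", "rawY"]             # unlabeled RAW from the target camera

translate = train_cycler2r(raw_i, rgb_b)   # step 1: train CycleR2R
sim_raw_b = translate(rgb_b)               # step 2: generate simRAW_b
detector = train_detector(sim_raw_b)       # step 3: train the detector
# step 4: evaluate the detector on RAW_i, the same camera whose RAW style
# CycleR2R already saw during step 1 -- the crux of the question above.
```

The point of the sketch is that RAW_i appears both in step 1 (unlabeled, shaping the translation) and in step 4 (as the test set), which is exactly the potential style-leakage the question raises.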