Hello,

I'm very interested in the GitHub course, and I'm wondering about the data input to the FSD autonomous-driving perception solution: is it based on RGB image data or on camera raw data as the input?
Here are some materials from the AI Day presentations. At the 2021 AI Day, the perception algorithm is shown running on camera raw data, described as "Bayer raw, 12-bit, no tonemapping". However, at the 2022 AI Day, the perception algorithm is shown running on RGB (PNG/JPG) data, described as "RGB, 8-bit, ISP-processed".
My questions about the data input are as follows:
Q1: Are the input data descriptions above accurate? What is the best data type for the perception input?
Q2: Shadow mode collects data such as video clips after ISP processing and encoding, so is that encoded video the input for training? If not, i.e. with Bayer raw as the input, how do you reduce the distribution gap between the original raw data and the RGB data? (A small sketch of what I mean by this gap follows below.)
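To make the "distribution gap" in Q2 concrete, here is a minimal sketch of how a 12-bit Bayer frame becomes an 8-bit ISP-style RGB frame. This is my own illustration, not any confirmed FSD pipeline; the RGGB pattern, the gamma value, and the `fake_isp` name are all assumptions.

```python
# Minimal illustration (not the actual FSD/ISP pipeline) of the gap between the two inputs:
# a 12-bit Bayer mosaic is linear in scene radiance, while the 8-bit ISP output has been
# demosaiced, tone-mapped, and quantized, so their value distributions differ a lot.
import numpy as np
import cv2

def fake_isp(bayer_raw_12bit: np.ndarray) -> np.ndarray:
    """Approximate an ISP: demosaic -> normalize -> gamma tone-map -> 8-bit quantize."""
    raw_u16 = bayer_raw_12bit.astype(np.uint16)                 # 12-bit values stored in uint16
    rgb_linear = cv2.cvtColor(raw_u16, cv2.COLOR_BayerRG2RGB)   # demosaic (RGGB pattern assumed)
    rgb_norm = rgb_linear.astype(np.float32) / 4095.0           # scale to [0, 1], still linear
    rgb_tonemapped = np.power(rgb_norm, 1.0 / 2.2)              # simple gamma as a stand-in for tonemapping
    return (rgb_tonemapped * 255.0 + 0.5).astype(np.uint8)      # quantize to 8-bit RGB

if __name__ == "__main__":
    # Synthetic 12-bit Bayer frame, only to compare the two value ranges/distributions.
    bayer = np.random.randint(0, 4096, size=(960, 1280), dtype=np.uint16)
    rgb = fake_isp(bayer)
    print("raw:", bayer.dtype, bayer.min(), bayer.max())   # linear, 0..4095
    print("rgb:", rgb.dtype, rgb.min(), rgb.max())          # non-linear, 0..255
```

My concern is that if training uses frames after this kind of processing (plus video encoding), but inference runs on Bayer raw, the two inputs live in quite different distributions.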
Thank you very much.