ApolloAuto / apollo


APOLLO8.0 BEV Camera Detection #14742

Closed yyqgood closed 1 year ago

yyqgood commented 1 year ago

Regarding Apollo 8.0 BEV camera detection, I have two questions:

  1. Can you provide operation documentation for these functions?
  2. Can you provide relevant offline data packages for testing and verification?

Looking forward to your reply, thank you!

AlexandrZabolotny commented 1 year ago

Hi everyone! I have a similar question. I was trying to run Apollo 8.0 BEV camera detection and got this error:

terminate called after throwing an instance of 'phi::enforce::EnforceNotMet' what(): (NotFound) Cannot open file /apollo/modules/perception/production/data/perception/camera/models/petr_v1/petr_inference.pdmodel, please confirm whether the file is normal. [Hint: Expected static_cast<bool>(fin.is_open()) == true, but received static_cast<bool>(fin.is_open()):0 != true:1.] (at /apollo/data/Paddle/paddle/fluid/inference/api/analysis_predictor.cc:1901)

Aborted (core dumped)

3. Where can I get the .pdmodel file?

daohu527 commented 1 year ago

@AlexandrZabolotny You can download the model here: https://github.com/PaddlePaddle/Paddle3D/tree/release/1.0/docs/models/petr

Model files are usually very large, so they are not bundled with the code, and we forgot to mount petr_v1. In the future we will use the amodel tool to manage models; for example, you can install models and view which models are present in the Apollo system.

You can find more info here: amodel
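
For reference, a rough sketch of how the amodel workflow is meant to look once models are managed this way (subcommand names are taken from the amodel documentation and should be treated as illustrative rather than authoritative):

# list the models currently installed in the Apollo system
amodel list

# install a packaged model from a local archive or a download URL
amodel install <path_or_url_to_model_package>

# show the meta information of an installed model
amodel info <model_name>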

daohu527 commented 1 year ago

Can you provide operation documentation for these functions?

Yes, we have such a document describing the whole process from model training to running the model in Apollo. Here are the details: how_to_train_and_deploy_model_to_apollo

You can also view the 8.0 documents in the Apollo community.

Can you provide relevant offline data packages for testing and verification?

Yes. We also noted the need to maintain data consistency between training and validation, so we currently support dataset conversion to Apollo record format. The adataset tool can convert the nuScenes and KITTI datasets to Apollo records, and this is how we validate the BEV model.

Currently BEV only supports the nuScenes dataset. Here are the details about adataset.
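
For reference, a hedged sketch of converting a nuScenes dataset into an Apollo record with adataset; the flag names below are assumptions based on the adataset documentation, so check the linked page for the exact interface:

# convert a nuScenes dataset directory into Apollo record files
# (dataset/input/output flag names are illustrative)
adataset -d=nuscenes -i=/path/to/nuscenes -o=/path/to/output_records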

yyqgood commented 1 year ago

@daohu527 thank you for your reply!

Regarding the 2nd question:

I mean: do we have data packages like demo_3.5.record or sensor_rgb.record for verifying the BEV pipeline?

The BEV record data package should include the following topics: /apollo/sensor/camera/CAM_FRONT/image, /apollo/sensor/camera/CAM_FRONT_LEFT/image, /apollo/sensor/camera/CAM_FRONT_RIGHT/image, /apollo/sensor/camera/CAM_BACK_LEFT/image, /apollo/sensor/camera/CAM_BACK_RIGHT/image, /apollo/sensor/camera/CAM_BACK/image, and so on.
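
As a quick sanity check, the cyber_recorder info subcommand can be used to list the channels contained in a record and verify that these camera topics are present, for example:

# print the channels, message counts and time range of a record file
cyber_recorder info <bev_record_file>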

Looking forward to your reply, thank you!

daohu527 commented 1 year ago

@yyqgood Yes, you can choose to generate the record file yourself, or download the prepared one.

We will update the link soon.

AlexandrZabolotny commented 1 year ago

@yyqgood Yes, you can choose to generate the record file yourself, or download the prepared one.

@daohu527 thanks for your recommendations. I downloaded the model and now my perception module starts without errors, but the algorithm does not detect any objects. Does anyone have advice on how to find the cause and how to debug this? @yyqgood I use the LGSVL simulator and saved a record file for you. Do not forget to run the decompression command: cyber_launch start modules/drivers/tools/image_decompress/launch/image_decompress.launch. Also run the static transform!
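
For reference, these are the two helper launches referred to above, as they would typically be started (the image_decompress launch file name is reconstructed from its directory path; the static_transform command matches the one used later in this thread):

# decompress the /image/compressed camera topics into raw /image topics
cyber_launch start modules/drivers/tools/image_decompress/launch/image_decompress.launch

# publish the static sensor extrinsics (TF tree) needed by perception
cyber_launch start modules/transform/launch/static_transform.launch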

AlexandrZabolotny commented 1 year ago

For the transforms, here is my LGSVL sensor configuration:

{ "type": "Color Camera", "name": "CAM_FRONT", "params": { "Width": 1920, "Height": 1080, "Frequency": 15, "JpegQuality": 75, "FieldOfView": 50, "MinDistance": 0.1, "MaxDistance": 2000, "Topic": "/apollo/sensor/camera/CAM_FRONT/image/compressed" }, "transform": { "x": 0, "y": 1.6, "z": 1.2, "pitch": 0, "yaw": 0, "roll": 0 } }, { "type": "Color Camera", "name": "CAM_FRONT_LEFT", "params": { "Width": 1920, "Height": 1080, "Frequency": 15, "JpegQuality": 75, "FieldOfView": 50, "MinDistance": 0.1, "MaxDistance": 2000, "Topic": "/apollo/sensor/camera/CAM_FRONT_LEFT/image/compressed" }, "transform": { "x": -0.5, "y": 1.6, "z": 1.2, "pitch": 0, "yaw": -56, "roll": 0 } }, { "type": "Color Camera", "name": "CAM_FRONT_RIGHT", "params": { "Width": 1920, "Height": 1080, "Frequency": 15, "JpegQuality": 75, "FieldOfView": 50, "MinDistance": 0.1, "MaxDistance": 2000, "Topic": "/apollo/sensor/camera/CAM_FRONT_RIGHT/image/compressed" }, "transform": { "x": 0.5, "y": 1.6, "z": 1.2, "pitch": 0, "yaw": 56, "roll": 0 } }, { "type": "Color Camera", "name": "CAM_BACK", "params": { "Width": 1920, "Height": 1080, "Frequency": 15, "JpegQuality": 75, "FieldOfView": 50, "MinDistance": 0.1, "MaxDistance": 2000, "Topic": "/apollo/sensor/camera/CAM_BACK/image/compressed" }, "transform": { "x": 0, "y": 1.6, "z": -1.2, "pitch": 0, "yaw": 180, "roll": 0 } }, { "type": "Color Camera", "name": "CAM_BACK_LEFT", "params": { "Width": 1920, "Height": 1080, "Frequency": 15, "JpegQuality": 75, "FieldOfView": 50, "MinDistance": 0.1, "MaxDistance": 2000, "Topic": "/apollo/sensor/camera/CAM_BACK_LEFT/image/compressed" }, "transform": { "x": -0.5, "y": 1.6, "z": -1.2, "pitch": 0, "yaw": -108, "roll": 0 } }, { "type": "Color Camera", "name": "CAM_BACK_RIGHT", "params": { "Width": 1920, "Height": 1080, "Frequency": 15, "JpegQuality": 75, "FieldOfView": 50, "MinDistance": 0.1, "MaxDistance": 2000, "Topic": "/apollo/sensor/camera/CAM_BACK_RIGHT/image/compressed" }, "transform": { "x": 0.5, "y": 1.6, "z": -1.2, "pitch": 0, "yaw": 111, "roll": 0 } },

daohu527 commented 1 year ago

@AlexandrZabolotny

We are fixing the sensor meta problem and will add the model deployment and record file to the readme tomorrow. As you said, the camera intrinsic parameters may have a great influence, which is why we use the dataset to make the test record file.

At the same time, BEV has relatively high computing power requirements; without enough compute, the detection frame rate will not be very high.

yyqgood commented 1 year ago

@AlexandrZabolotny Thanks for your support!

@daohu527 Looking forward to tomorrow's update too!

daohu527 commented 1 year ago

You can download the bev_test_record here; it is 1.9 GB.

The official document update may be delayed until after the Spring Festival.

yyqgood commented 1 year ago

@daohu527

  1. Thanks for the data package, we will try it out.
  2. We also look forward to the official documentation that you will release after the Spring Festival.

Thanks again!

AlexandrZabolotny commented 1 year ago

@AlexandrZabolotny You can download the model here: https://github.com/PaddlePaddle/Paddle3D/tree/release/1.0/docs/models/petr

Model files are usually very large, so they are not bundled with the code, and we forgot to mount petr_v1. In the future we will use the amodel tool to manage models; for example, you can install models and view which models are present in the Apollo system.

You can find more info here: amodel

Hi @daohu527, thank you for your help.

I have a couple of questions about the Apollo team's plans. Apollo v8 supports the petr_v1 model, but the PaddlePaddle team has already released petr_v2, which can detect road lane markings and perform semantic segmentation.

  1. Does the team have plans to use the petr_v2 model?
  2. Does the team have plans to implement localization using cameras, similar to Tesla's approach?

daohu527 commented 1 year ago

Does the team have plans to use the petr_v2 model?

Not yet; currently we plan to optimize BEV detection first. There is a big difference between BEV and Apollo's previous solution, and it needs to be refactored in many aspects, so we have temporarily decided to optimize the existing functions first.

Does the team have plans to implement localization using cameras, similar to Tesla's approach?

Visual localization is currently better suited to specific scenarios, and we may try to port more SLAM algorithms to Apollo in the future.

To be honest, BEV is an organic whole and is not compatible with the L4 solution, so we will continue to optimize the existing functions. Of course, we are always ready to innovate and add new algorithms.

AlexandrZabolotny commented 1 year ago

Hello @daohu527, thank you a lot for providing the record file and for your support. I still have a couple of questions.

  1. What are the hardware requirements? I am using a 9th Gen Intel Core i7 and a GeForce RTX 2070 graphics card. I see that the frequency of the input camera channels is 12 Hz, but as a result we get an obstacle channel with a frequency of only 3 Hz.

  2. Is there a way to programmatically increase the frequency of the output channel?

Screenshot from 2023-01-26 10-14-23

daohu527 commented 1 year ago

What are the hardware requirements?

Yes, we tested on a 3070 card and its output frequency is around 3 Hz. The computing power requirement of BEV is higher than that of the detection model, and we have not yet tested cards with more computing power.

Is there a way to programmatically increase the frequency of the output channel?

There are two ways: one is to optimize the model, for example by using int8 quantization, and the other is to increase the hardware computing power. We may optimize it based on NVIDIA Orin in the future.

lucianzhong commented 1 year ago

I cannot run BEV perception.

using: mainboard -d /apollo/modules/perception/production/dag/dag_streaming_bev_camera.dag

And the errors:

error 1: [transform_wrapper.cc:276] [mainboard]Can not find transform. 1532402927.612459898 frame_id: novatel child_frame_id: CAM_FRON… child_frame_id: "localization"

error 2: Error info: canTransform: target_frame novatel does not exist. canTransform: source_frame CAM_FRONT does not exist.:timeout

error 3:
[camera_bev_detection_component.cc:376] [perception]failed to get camera to world pose, ts: 1.5324e+09 camera_name: CAM_FRONT

error 4: terminate called after throwing an instance of 'phi::enforce::EnforceNotMet' InvalidArgumentError: The squeeze2 Op's Input Variable X contains uninitialized Tensor

By the way, if running mainboard -d with the .dag file successfully launches bev_perception, and then cyber_recorder play -f bev_test.record, is that the way to run the bev_perception demo? Correct me if I am missing some steps.

AlexandrZabolotny commented 1 year ago

Did you run the static_transform and localization modules?

lucianzhong commented 1 year ago

Did you run the static_transform and localization modules?

static_transform yes, but no localization module. Errors: terminate called after throwing an instance of 'phi::enforce::EnforceNotMet' InvalidArgumentError: The squeeze2 Op's Input Variable X contains uninitialized Tensor

lucianzhong commented 1 year ago

By the way, have you guys successfully run the petr_bev perception in Apollo 8.0? How is the precision of the obstacles' position, dimensions, etc.? @AlexandrZabolotny @yyqgood @daohu527

AlexandrZabolotny commented 1 year ago

How is the precision of the obstacles' position, dimensions, etc.?

Yes, I have. The obstacles' position, dimensions, etc. are accurate enough, but the frequency is low.

lucianzhong commented 1 year ago

How is the precision of the obstacles' position, dimensions, etc.?

Yes, I have. The obstacles' position, dimensions, etc. are accurate enough, but the frequency is low.

I downloaded the Apollo 8.0 version, then ran dev_start, dev_into, and build_gpu:

cyber_recorder play -f bev_test.record --loop

cyber_launch start modules/transform/launch/static_transform.launch

mainboard -d /apollo/modules/perception/production/dag/dag_streaming_bev_camera.dag

But I get an error. Could you please tell me your steps? Thanks a lot.

daohu527 commented 1 year ago

The perception sensor configuration is bound to the vehicle model, so you need to select the configuration before starting. Apollo uses Dreamview to select the vehicle model.

So we should select the vehicle model in Dreamview first.

lucianzhong commented 1 year ago

The perception sensor configuration is bound to the vehicle model, so you need to select the configuration before starting. Apollo uses Dreamview to select the vehicle model.

So we should select the vehicle model in Dreamview first.

Thanks a lot. Now I can run bev_perception.

LeoLai0926 commented 2 months ago

The perception sensor configuration is bound to the vehicle model, so you need to select the configuration before starting. Apollo uses Dreamview to select the vehicle model.

So we should select the vehicle model in Dreamview first.

Thanks for your team's great work. I have some config file issues when running the camera_detection_bev component.

After launching the component, the console reports that it cannot find CAM_BACK in data/conf/sensor_meta.pb.txt:

[camera_detection_bev]  WARNING: Logging before InitGoogleLogging() is written to STDERR
[camera_detection_bev]  I1122 09:46:50.160238 121716 module_argument.cc:89] []command: mainboard -d /apollo/modules/perception/camera_detection_bev/dag/camera_detection_bev.dag -p camera_detection_bev -s CYBER_DEFAULT
[camera_detection_bev]  I1122 09:46:50.160926 121716 global_data.cc:153] []host ip: 172.168.1.100
[camera_detection_bev]  I1122 09:46:50.164680 121716 module_argument.cc:62] []binary_name_ is mainboard, process_group_ is camera_detection_bev, has 1 dag conf
[camera_detection_bev]  I1122 09:46:50.164710 121716 module_argument.cc:65] []dag_conf: /apollo/modules/perception/camera_detection_bev/dag/camera_detection_bev.dag
[camera_detection_bev]  F1122 09:46:57.244764 121716 camera_detection_bev_component.cc:44] Check failed: model [perception]Can't find CAM_BACK in data/conf/sensor_meta.pb.txt
[camera_detection_bev]  *** Check failure stack trace: ***
[camera_detection_bev]      @     0xffffb041ea38  google::LogMessage::Fail()
[camera_detection_bev]      @     0xffffb0420d24  google::LogMessage::SendToLog()
[camera_detection_bev]      @     0xffffb041e554  google::LogMessage::Flush()
[camera_detection_bev]      @     0xffffb0421184  google::LogMessageFatal::~LogMessageFatal()
[camera_detection_bev]      @     0xffffac56d674  apollo::perception::camera::CameraDetectionBevComponent::InitDetector()
[camera_detection_bev]      @     0xffffac56ddc4  apollo::perception::camera::CameraDetectionBevComponent::Init()
[camera_detection_bev]      @     0xaaaae7b4da2c  apollo::cyber::Component<>::Initialize()
[camera_detection_bev]      @     0xaaaae7b517e4  apollo::cyber::mainboard::ModuleController::LoadModule()
[camera_detection_bev]      @     0xaaaae7b51ba4  apollo::cyber::mainboard::ModuleController::LoadModule()
[camera_detection_bev]      @     0xaaaae7b51264  apollo::cyber::mainboard::ModuleController::LoadAll()
[camera_detection_bev]      @     0xaaaae7b4ddd4  apollo::cyber::mainboard::ModuleController::Init()
[camera_detection_bev]      @     0xaaaae7b4c8b0  main
[camera_detection_bev]      @     0xffffaffb2e10  __libc_start_main
[camera_detection_bev]      @     0xaaaae7b4c774  (unknown)
[cyber_launch_121709] ERROR Process [camera_detection_bev] has died [pid 121716, exit code -6, cmd mainboard -d /apollo/modules/perception/camera_detection_bev/dag/camera_detection_bev.dag -p camera_detection_bev -s CYBER_DEFAULT].
[cyber_launch_121709] INFO All processes has died.
[cyber_launch_121709] INFO Cyber exit.
[cyber_launch_121709] INFO All processes have been stopped.
[nvidia@in-dev-docker:/apollo]$

And my sensor_meta.pb.txt looks like this:

sensor_meta {
    name: "velodyne64"
    type: VELODYNE_64
    orientation: PANORAMIC
    is_main_sensor: true
}

sensor_meta {
    name: "front_6mm"
    type: MONOCULAR_CAMERA
    orientation: FRONT
}

sensor_meta {
    name: "front_12mm"
    type: MONOCULAR_CAMERA
    orientation: FRONT
}

sensor_meta {
    name: "radar_front"
    type: LONG_RANGE_RADAR
    orientation: FRONT
}

So how should I set CAM_BACK in the sensor_meta.pb.txt file?

BTW, I am running with the official model (petr_v1) and the test data you provided above (bev_test.record).
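
For reference, a minimal sketch of what the missing camera entries could look like in sensor_meta.pb.txt, assuming the MONOCULAR_CAMERA type used by the existing entries and the orientation enum from Apollo's sensor meta schema; the Dreamview vehicle selection mentioned in the follow-up below regenerates these files automatically, so treat this only as an illustration:

sensor_meta {
    name: "CAM_FRONT"
    type: MONOCULAR_CAMERA
    orientation: FRONT
}

sensor_meta {
    name: "CAM_BACK"
    type: MONOCULAR_CAMERA
    orientation: REAR
}

The other four CAM_* cameras would follow the same pattern with their respective orientations.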

LeoLai0926 commented 2 months ago

OK, my bad, the developers have mentioned this before:

https://github.com/ApolloAuto/apollo/issues/14742#issuecomment-1511227803

Just select Nuscense165 in the ADS Resources panel in Dreamview, and the config files will update automatically.