ApolloAuto / apollo

An open autonomous driving platform
Apache License 2.0
25.05k stars 9.68k forks

How to train a new model with our own data and use PointPillars in other situations? #12623

Open Shaoqing26 opened 4 years ago

Shaoqing26 commented 4 years ago

We appreciate your excellent work on Apollo. I found that PointPillars has been integrated into Apollo 6.0, and I would like to know how to train on our own lidar data and use the two-stage ONNX models, pfe and rpn, for detection. Which version of PointPillars was used to produce the ONNX files? I found the structure differs from the original work. Can you provide the Python training code you used, so we can use it to train on our own data? Thanks.

To summarize: 1) How can we train on our own data and use the network structure in Apollo 6.0 to run detection? 2) Can you provide the Python training code, so we can reproduce the two-stage ONNX models? 3) How many lidar beams did Apollo use to train the released model?

I really appreciate the work you have done.

Looking forward to your reply.

System information

OS: Linux Ubuntu 18.04; Apollo installed from: source; Apollo version: 6.0

jeroldchen commented 4 years ago

@Shaoqing26 Yes, we provide an online service for training a PointPillars model on your own data. For details, please check https://github.com/ApolloAuto/apollo/blob/master/docs/Apollo_Fuel/Perception_Lidar_Model_Training/README.md .

Shaoqing26 commented 4 years ago

Can you provide an offline Python version for training? Also, I have no camera data fused with the lidar; how can I train with only lidar data?

panfengsu commented 4 years ago

@Shaoqing26 Right now we only provide a cloud service for training PointPillars models. We will support more training data formats later. Can you tell us your own data format? We will support it as far as possible. Thank you for supporting Apollo!

Shaoqing26 commented 4 years ago

Fine. Actually, I don't care about the data format, because I can convert my lidar data to the KITTI format for training, but I have only lidar data; I don't have any camera information. PointPillars is a point-based network, so why don't you separate it so it takes only lidar input?
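As a concrete illustration of that conversion step, here is a minimal sketch of writing a point cloud in the KITTI velodyne format (a flat float32 file of x, y, z, intensity rows); the helper names are my own, not from Apollo:

```python
import numpy as np

def save_kitti_bin(points, path):
    """Write an (N, 4) array of x, y, z, intensity values to a
    KITTI-style velodyne .bin file (flat float32)."""
    pts = np.asarray(points, dtype=np.float32)
    assert pts.ndim == 2 and pts.shape[1] == 4, "expected (N, 4) points"
    pts.tofile(path)

def load_kitti_bin(path):
    """Read a KITTI-style velodyne .bin file back into an (N, 4) array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)
```

If a sensor does not report intensity, a constant column (e.g. zeros) can stand in for it.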

Also, I want to use the network with 16-beam, 32-beam, and other lidars, not only 128-beam, so I have to train my own model. But Apollo only supports online training through BOS and the Apollo Fuel platform, and neither of them is free.

Since Apollo is open source, why can't you provide an offline training version so that we can use it freely?

panfengsu commented 4 years ago

@Shaoqing26 We have seen your request. Actually, apollo-fuel is a SaaS service, so users do not need to set up the environment, including software and GPUs, which makes it more convenient to train models. We will seriously consider your suggestion.

Shaoqing26 commented 4 years ago

OK, thanks. One more question: do I need both lidar and camera information to train a model in apollo-fuel? In other words, how can I get a working model with only lidar data?

panfengsu commented 4 years ago

@Shaoqing26 Actually, camera information is not needed; we will optimize this soon. But I need to know how you label your lidar data, and whether the labels are in camera coordinates or lidar coordinates. We are happy that you put forward your specific needs to help us optimize this service, and we will also seriously consider your suggestion to provide an offline training version.

Shaoqing26 commented 4 years ago

Thanks. I know that PointPillars is a pure point-cloud network. Most point-cloud detection networks in published papers are trained on KITTI data, and KITTI has both camera and lidar data, which are projected onto each other in the network. We can just label objects directly in the point cloud with: location information (the object center x, y, z plus height, width, and length, or the 8 corner points of the box); rotation information (pitch, roll, yaw); and other fields like type and truncation, which can follow KITTI.

I think location information like the 8 corner points of the box is enough; we can calculate the other values from it. Just don't project camera data back and forth inside the network.
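To make the "8 corners from center + size + rotation" point concrete, here is a small sketch (my own helper, not Apollo code) that derives the corner points from a center, box dimensions, and a single yaw angle around the vertical axis, which is the usual simplification when pitch and roll are negligible:

```python
import numpy as np

def box_corners(center, dims, yaw):
    """Return the 8 corners of a 3D box in lidar coordinates.

    center: (x, y, z) of the box center
    dims:   (length, width, height)
    yaw:    rotation around the vertical (z) axis, in radians
    """
    l, w, h = dims
    # Axis-aligned corner offsets around the origin.
    x = np.array([ l,  l,  l,  l, -l, -l, -l, -l]) / 2.0
    y = np.array([ w,  w, -w, -w,  w,  w, -w, -w]) / 2.0
    z = np.array([ h, -h,  h, -h,  h, -h,  h, -h]) / 2.0
    corners = np.stack([x, y, z])  # shape (3, 8)
    # Rotate around z, then translate to the box center.
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (rot @ corners).T + np.asarray(center)  # shape (8, 3)
```

Going the other way (corners back to center, dimensions, and yaw) is equally mechanical, which is why either labeling convention carries the same information.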

panfengsu commented 4 years ago

@Shaoqing26 You are right, but you must convert your location and rotation information to the lidar coordinate system. KITTI annotation information is in camera coordinates, so it needs the camera image and calibration information.
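For readers facing the same conversion, here is a rough sketch of that coordinate change, assuming a KITTI-style calibration where a lidar point maps to the rectified camera frame as cam = R0_rect · Tr_velo_to_cam · velo; the function name and argument layout are my own assumptions, not Apollo's API:

```python
import numpy as np

def camera_to_lidar(pts_cam, r0_rect, tr_velo_to_cam):
    """Map points from KITTI rectified-camera coordinates back to lidar
    coordinates by inverting cam = R0_rect @ (R @ velo + t).

    pts_cam:        (N, 3) points in rectified camera coordinates
    r0_rect:        (3, 3) rectification rotation
    tr_velo_to_cam: (3, 4) lidar-to-camera transform [R | t]
    """
    # Undo rectification: row-vector form of applying R0_rect^T.
    pts = pts_cam @ r0_rect
    # Undo the rigid lidar-to-camera transform: row-vector form of R^T @ (p - t).
    r, t = tr_velo_to_cam[:, :3], tr_velo_to_cam[:, 3]
    return (pts - t) @ r
```

Box rotations need the same treatment: a KITTI rotation_y around the camera's down-pointing y axis becomes a yaw around the lidar's up-pointing z axis.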

Shaoqing26 commented 4 years ago

Yes, I know. The KITTI annotations are in camera coordinates and are transformed inside the network, but I don't want any camera information in the network. PointPillars itself does not need image information; it just takes the pure point cloud as input and produces the result.

Proposal: 1) Provide an offline training version for Apollo fans. 2) Separate the pure lidar path from the camera path, and feed the pure point cloud into the network.

I think there are lots of Apollo fans who have the same confusion as me.

I hope Apollo gets better and better.