tyagi-iiitv / PointPillars

GNU General Public License v3.0

What is the reason for using calibration files here? We only want the object coordinates and yaw. Please help :(( #34

Open Manueljohnson063 opened 3 years ago

mariya12290 commented 3 years ago

Hey @Manueljohnson063

KITTI labels are in the camera coordinate system. Since we need BBox predictions in the lidar coordinate system, the network needs ground-truth labels in the lidar coordinate system, and the calibration files provide the transform between the two.
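A minimal sketch of that camera-to-lidar conversion. The matrix values below are hypothetical placeholders; in practice `Tr_velo_to_cam` and `R0_rect` are read from each frame's KITTI calib file:

```python
import numpy as np

# Placeholder calibration (real values come from the KITTI calib files).
# Tr_velo_to_cam maps lidar points into the camera frame; R0_rect
# rectifies the camera frame. This toy Tr encodes the usual axis swap:
# camera x = -lidar y, camera y = -lidar z, camera z = lidar x.
Tr_velo_to_cam = np.array([
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.0],
    [1.0,  0.0,  0.0, 0.0],
    [0.0,  0.0,  0.0, 1.0],
])
R0_rect = np.eye(4)

def cam_to_lidar(xyz_cam):
    """Map a 3D point from the rectified camera frame to the lidar frame."""
    p = np.append(xyz_cam, 1.0)        # homogeneous coordinates
    T = R0_rect @ Tr_velo_to_cam       # lidar -> rectified camera
    return (np.linalg.inv(T) @ p)[:3]  # invert to go camera -> lidar

# A point 10 m in front of the camera (camera z is forward) ends up
# 10 m along the lidar's forward x axis.
print(cam_to_lidar(np.array([0.0, 0.0, 10.0])))  # roughly [10. 0. 0.]
```

The same transform is applied to the label's box centre before training, which is exactly why the calib files are needed even if you only care about coordinates and yaw.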

Please close the issue if the above answer was helpful to you.

tyagi-iiitv commented 3 years ago

Hi @mariya12290 Can you please become one of this repo's contributors? Sorry, I have been busy with other projects; maybe you can help maintain this repo.

mariya12290 commented 3 years ago

Hey @tyagi-iiitv Thank you for considering me to be a part of your work. I am currently writing my master's thesis, so I need to spend a lot of time on it, but I can assure you that I will try to answer issues as much as possible in my free time.

I hope you don't mind.

Manueljohnson063 commented 3 years ago

@mariya12290 Thanks for your reply, sir. Currently I am trying to detect objects from the point cloud alone, so I commented out the x, y, z conversion (camera to lidar) and fed the data in directly (I use a 3D lidar labeller), but I did not get any output. Could you please give me a suggestion? Thanks in advance.

mariya12290 commented 3 years ago

Hey @Manueljohnson063 Please visualize the point cloud once the pre-processing is done, and try to visualize the bboxes as well; that should tell you something about the ground truth and the data going into the network.

Sometimes the bboxes are wrong when fed into the network; sometimes the point cloud is wrong. I cannot say exactly what the problem is, but I can say how to find the error.

Hope this helps you.
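For that kind of sanity check, a bird's-eye-view overlay is usually enough. A minimal sketch (the box parameters and plotting calls are illustrative, not from this repo):

```python
import numpy as np

def bev_box_corners(x, y, l, w, yaw):
    """Return the four bird's-eye-view corners of a box centred at (x, y)
    with length l, width w and heading yaw (radians), in the lidar frame."""
    # Corners in the box's own frame, before rotation
    local = np.array([[ l / 2,  w / 2], [ l / 2, -w / 2],
                      [-l / 2, -w / 2], [-l / 2,  w / 2]])
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])  # 2D rotation by yaw
    return local @ R.T + np.array([x, y])

corners = bev_box_corners(5.0, 2.0, 4.0, 1.8, 0.0)

# Overlay on the point cloud with matplotlib, e.g.:
#   plt.scatter(points[:, 0], points[:, 1], s=0.5)
#   plt.plot(*np.vstack([corners, corners[0]]).T)
```

If the drawn boxes do not sit on the objects in the cloud, the ground truth (or its coordinate conversion) is wrong before the network ever sees it.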

Manueljohnson063 commented 3 years ago

Thanks for your reply, sir. Could you please give me a hint about what challenges might occur when dealing with point clouds?

mariya12290 commented 3 years ago

Hey @Manueljohnson063, give me some time; I will share my repo with you, or I will push a commit to this repo.

Manueljohnson063 commented 3 years ago


Hi, did you manage to do it with your own point cloud?

Manueljohnson063 commented 3 years ago

@mariya12290 Hi sir, I trained the setup on my own lidar dataset (point clouds collected from a 32-layer lidar). I am not getting good results; I am getting bboxes everywhere. Could you please help me out, sir?

mariya12290 commented 3 years ago

@Manueljohnson063 Are you sure your network is learning correctly? First, check the data and the ground truth; second, debug the model from the beginning up to the predictions (if the network is making some predictions, it is most probably implemented correctly).

In your case I assume the network is learning garbage. Did you check whether all the predicted boxes have the same size for a specific class? If not, why? Play with the hyperparameters and see whether different settings change anything.

Check the coordinate system of the ground truth as well, i.e. whether it is in camera or lidar coordinates when fed into the network.

It is also possible that the network implementation is wrong, i.e. not in line with the original paper or the PyTorch model.

I hope that you will find the solution.
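The box-size check above can be automated. A small sketch with hypothetical predictions (the class names and dimensions are made up for illustration); if every box of a class has identical dimensions, the regression head has likely collapsed onto the anchor sizes:

```python
import numpy as np

# Hypothetical predicted box dimensions, one row per box as (l, w, h),
# grouped by class. Replace with your network's actual outputs.
preds = {
    "Car":        np.array([[3.9, 1.6, 1.5], [3.9, 1.6, 1.5], [3.9, 1.6, 1.5]]),
    "Pedestrian": np.array([[0.8, 0.6, 1.7], [0.9, 0.7, 1.8]]),
}

for cls, dims in preds.items():
    spread = dims.std(axis=0)  # per-axis standard deviation of l, w, h
    print(cls, "dimension std (l, w, h):", spread)
    if np.allclose(spread, 0.0):
        # Zero spread means the network outputs the same box every time,
        # i.e. it is not actually regressing sizes from the data.
        print(f"  -> all {cls} boxes are identical; regression head may not be learning")
```

Boxes appearing "everywhere" with identical sizes usually points to the classification/regression heads learning nothing, which in turn usually traces back to wrong ground truth coordinates.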

Manueljohnson063 commented 3 years ago

Hi, thanks for your reply. 1. I first convert all my data into the KITTI dataset format. 2. Then I feed the data to the network. Sir, could you please explain "It might be possible that the network implementation is wrong, not according to the original paper or PyTorch model" a little more? One more question: have you done the same with a point cloud other than KITTI? Thanks in advance; I have been struggling for the past month.
