Open kevindean87 opened 5 years ago
@kevindean87 Hi, is it possible to use PointNet for higher dimensional vectors such as 500-D and perform training? I need to process higher dimensional points.
Hi @kevindean87, could you please tell me how to feed LiDAR data into testing, if the data has XYZ information in txt format?
Thank you!
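A minimal sketch of loading such a txt file into the (batch_size, num_point, 3) NumPy array that the evaluation placeholders expect; the file name and the "x y z per line" layout are assumptions, not taken from the repo:

```python
# Sketch only: read a whitespace-separated "x y z" txt scan into a batch of one cloud.
import numpy as np

points = np.loadtxt('scan.txt', dtype=np.float32)[:, :3]  # (N, 3), assumed column order x y z
points -= points.mean(axis=0)                             # center the cloud, a common preprocessing step
batch = points[np.newaxis, ...]                           # (1, N, 3) for a batch_size-1 placeholder
```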
I believe PointCNN might be more suitable for what you are looking for; as far as I know, PointCNN is built on top of PointNet.
-KED
@csitaula - the comment at the bottom about PointCNN was for you (sorry about that). From what I have discussed with friends / colleagues, I believe that PointCNN could possibly do what you require.
@iris0329 - which directory are you sifting through? (simple classification, classification and part_seg, or classification and sem_seg)
@kevindean87 Yes, I want to use the sem_seg part to test on my own indoor data, which was collected with a LiDAR. The raw data has XYZRGB, and I ran collect_indoor3d_data.py to add the label info (although this label information is useless for the testing phase), and gen_indoor3d_h5.py to add the normalized X' Y' Z'. But the testing result is bad; you can see the data is very noisy.
Thank you!
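For reference, a hedged sketch of one way to wrap a custom XYZRGB cloud in the S3DIS-style layout that collect_indoor3d_data.py expects; the area/room names and the single dummy "clutter" class below are my own assumptions (labels are ignored at test time anyway):

```python
# Sketch only: export an (N, 6) "x y z r g b" array as a fake S3DIS room so the
# sem_seg preprocessing scripts (collect_indoor3d_data.py, gen_indoor3d_h5.py)
# can be run on it.
import os
import numpy as np

def export_as_s3dis_room(xyzrgb, root, area='Area_1', room='myroom_1'):
    room_dir = os.path.join(root, area, room)
    ann_dir = os.path.join(room_dir, 'Annotations')
    os.makedirs(ann_dir, exist_ok=True)
    # Whole-room file: one "x y z r g b" line per point.
    np.savetxt(os.path.join(room_dir, room + '.txt'), xyzrgb, fmt='%.6f')
    # Dummy annotation: dump every point as a single placeholder class.
    np.savetxt(os.path.join(ann_dir, 'clutter_1.txt'), xyzrgb, fmt='%.6f')
```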
Hi @iris0329, for semantic segmentation, what should the folder format be for indoor3d? In the S3DIS data, under Area_1 there was an Annotations folder and a separate area .txt file, and under Annotations there were different files for the objects. Is the Annotations folder necessary?
Sorry for a wayyy late response, but I figured out how to do this.
-KED
@kevindean87 Hi, could you provide a code snippet showing how you implemented a variable point number at training time?
Furthermore, have you gained any experience/insights about PointNet's performance with a fixed versus a variable number of points during training? I suppose a fixed size during training leads to faster training (it keeps the model small) due to fewer parameters, whereas a variable point number comes closer to real use cases (when dealing with large scenes) and therefore generalizes better?
Thx.
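For what it's worth, here is a minimal sketch (TensorFlow 1.x, not the repo's actual code) of the core idea that makes a variable point count possible: the per-point MLPs are shared 1x1 convolutions, so the only place a fixed point_num gets baked into the graph is the fixed-size max pooling; using a None point dimension plus tf.reduce_max keeps the graph independent of N. Layer widths and names below are illustrative.

```python
# Sketch only: PointNet-style global feature with a dynamic number of points.
import tensorflow as tf

def pointnet_global_feature(num_feat=3):
    # (batch=1, N=dynamic, channels=num_feat)
    pts = tf.placeholder(tf.float32, shape=(1, None, num_feat), name='pointclouds_ph')
    net = tf.expand_dims(pts, 2)                                            # (1, N, 1, num_feat)
    for width in (64, 64, 64, 128, 1024):
        net = tf.layers.conv2d(net, width, (1, 1), activation=tf.nn.relu)   # shared per-point MLP
    global_feat = tf.reduce_max(net, axis=1)                                # symmetric over points -> (1, 1, 1024)
    return pts, tf.squeeze(global_feat, axis=1)                             # (1, 1024)
```

The input/feature transform nets and batch norm from the paper are omitted here; the point is only that none of the trainable parameters depend on point_num.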
Hello @charlesq34.
I have been able to use the part segmentation algorithm you generated with my own LiDAR data, and it works beautifully. After generating the model, I freeze the graph and optimize it. I later load it and try to use it. However, it is expecting a Tensor shape of (10, 2048, 3), or (batch_size, point_num, 3), for pointclouds_ph. Obviously, I want a batch_size of 1, and I want to set point_num to however many points are in my data (variable from ~10,000 to ~30,000). When I run your test.py (by simply loading the checkpoint), it works perfectly; but I need a frozen graph (for the way my company wants to implement TensorFlow). Is there a way to set up the training so that it is independent of a specific batch_size and/or point_num? Thanks for any help!
Kevin Edward Dean
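If the frozen graph has to keep its fixed (batch_size, point_num, 3) signature, one common workaround (sketched below under my own assumptions, not taken from the repo) is to resample every incoming cloud to exactly point_num points and run it as a batch of one.

```python
# Sketch only: fit a variable-size cloud into a frozen graph that expects a
# fixed (batch_size, point_num, 3) input, using batch_size = 1.
import numpy as np

def resample_to_fixed(points, point_num=2048):
    """points: (N, 3) array; returns a (1, point_num, 3) float32 batch."""
    n = points.shape[0]
    replace = n < point_num                    # duplicate points only when the cloud is too small
    idx = np.random.choice(n, point_num, replace=replace)
    return points[idx][np.newaxis, ...].astype(np.float32)
```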