AllenXiangX / SnowflakeNet

(TPAMI 2023) Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer
MIT License

Training with special point cloud dataset #14

Closed ema2161 closed 1 year ago

ema2161 commented 2 years ago

Hi, I want to use my own training dataset; is that possible? Could I set each of these point cloud datasets to a fixed size of 200,000 points, for example, instead of the 16,384 points you set for the PCN dataset? Best, Ema

AllenXiangX commented 2 years ago

Hi, you can change the parameters (Nc, N0, and up_factors) of the network according to your needs; see https://github.com/AllenXiangX/SnowflakeNet/blob/93e7151610765e7e2b41ace2d03c8750f0b6c80c/models/model.py#L158 and https://github.com/AllenXiangX/SnowflakeNet/blob/93e7151610765e7e2b41ace2d03c8750f0b6c80c/core/train_pcn.py#L55.
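
To make that concrete, a minimal sketch of adjusting the output resolution follows. The constructor name and arguments below (Model, dim_feat, num_pc, num_p0, up_factors) are assumptions based on the two linked files; verify them against models/model.py before relying on this.

```python
# A rough sketch (not tested against the repo). Assumes the Model class
# from models/model.py takes num_pc (Nc), num_p0 (N0) and up_factors;
# check the actual signature in the repo before use.
from models.model import Model  # assumed import path

# Final point count = num_p0 * product(up_factors).
# PCN's default is 512 * 4 * 8 = 16384; the example below yields
# 625 * 4 * 8 * 10 = 200000 points.
model = Model(
    dim_feat=512,           # feature dimension (PCN default)
    num_pc=256,             # Nc: number of coarse seed points
    num_p0=625,             # N0: points after the first stage
    up_factors=[4, 8, 10],  # per-SPD-layer upsampling ratios
)
```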

ema2161 commented 2 years ago

Thanks! Could you please comment on which other parameters I have to change or prepare for a new dataset? For example, how can I set up the .json file for my own data? Best regards, Ema

ema2161 commented 2 years ago

Hi, I have two groups of ASCII files; each file contains 2048 rows and 3 columns of coordinates, serving as the gt and partial clouds. I want to feed them to SnowflakeNet by combining them with the Completion3D training dataset. How can I convert them into .h5 files? Do I have to use special software? I'd appreciate any help!
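
No special software is needed for this step; h5py can write such files directly. A rough sketch follows, assuming each ASCII file is a whitespace-separated N x 3 array and that the loader expects a dataset key of "data" (an assumption; check the repo's Completion3D dataloader for the exact key).

```python
import glob
import os

import h5py
import numpy as np

def ascii_to_h5(ascii_path, h5_path, key="data"):
    """Convert one N x 3 ASCII point cloud to an .h5 file.

    The dataset key "data" is a guess at what Completion3D-style
    loaders expect; verify it against the repo's dataloader.
    """
    points = np.loadtxt(ascii_path, dtype=np.float32)  # shape (N, 3)
    assert points.ndim == 2 and points.shape[1] == 3
    with h5py.File(h5_path, "w") as f:
        f.create_dataset(key, data=points)

# Example: convert every .txt cloud in a folder (paths are illustrative).
for path in glob.glob("my_clouds/*.txt"):
    ascii_to_h5(path, os.path.splitext(path)[0] + ".h5")
```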

ema2161 commented 2 years ago

Dear Allen Xiang, thanks for your reply. I have about 200 ASCII files of gt point clouds and their corresponding partial point clouds, which are too few to train a deep learning model, so I'm not sure how I can feed them into SnowflakeNet. I would be very grateful if you could guide me on how to train or improve the model with my ASCII files. Do I need to change the file format to integrate them with the other existing input data? If yes, what format do they have to be, and what dataset do you suggest? Or, if I just need to fine-tune the pre-trained model weights with my data, how can I do that? Kind regards, Ema


AllenXiangX commented 2 years ago

Hi, sorry for the late reply. I think the easiest way is to fine-tune the pre-trained model on your dataset. It's not necessary to change the file format as long as you can read the data in the ASCII files into Python NumPy arrays. You may need to write your own dataloader to feed your data to SnowflakeNet. Best regards, Allen
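
For later readers, a rough sketch of such a dataloader and of loading released weights for fine-tuning. Everything below (directory layout, filename pairing, the "model" checkpoint key) is an assumption chosen to illustrate the idea, not the repo's actual code.

```python
import glob

import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class AsciiPairDataset(Dataset):
    """Loads (partial, gt) point cloud pairs from ASCII files.

    Assumes partial/xxx.txt pairs with gt/xxx.txt by sorted filename;
    adapt to your own directory layout.
    """
    def __init__(self, partial_dir, gt_dir):
        self.partial_paths = sorted(glob.glob(f"{partial_dir}/*.txt"))
        self.gt_paths = sorted(glob.glob(f"{gt_dir}/*.txt"))
        assert len(self.partial_paths) == len(self.gt_paths)

    def __len__(self):
        return len(self.partial_paths)

    def __getitem__(self, idx):
        # Each file: whitespace-separated rows of x y z coordinates.
        partial = np.loadtxt(self.partial_paths[idx], dtype=np.float32)
        gt = np.loadtxt(self.gt_paths[idx], dtype=np.float32)
        return torch.from_numpy(partial), torch.from_numpy(gt)

loader = DataLoader(AsciiPairDataset("partial", "gt"),
                    batch_size=8, shuffle=True)

# Fine-tuning start point: load the released checkpoint first.
# The "model" key is a guess; inspect the .pth file to confirm.
# model.load_state_dict(torch.load("pretrained.pth")["model"])
```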