mickeyouyou opened this issue 3 years ago
Are you absolutely sure that you used PCL 1.9 to write the files? I found this pull request: https://github.com/PointCloudLibrary/pcl/pull/2325 The description there (and in the linked issue) sounds just like what you are experiencing. If possible, please also check whether the problem occurs with PCL 1.12.0.
- Do you only use Ubuntu (both for generating and reading the files)?
- Is it possible that there was not enough disk space while writing the PCD files? Or could there be some other hardware limitation like too low write speed?
- I noticed that the bad file contains (supposedly) more points than the good file (317794 vs 317253). Judging from your other good and bad files, is it a pattern that the bad files contain more points than the good files?
- Can you reproduce the bad files in some way, or is it completely random whether a written file is bad or good?
BTW: starting with PCL 1.9.0, you get a nice error message instead of just a coredump when trying to read a corrupted file.
The files are stored under /mnt/s3. I called pcl::io::savePCDFileBinaryCompressed 50 times for every frame, which produces 3000 (frames) × 50 = 150,000 PCD files, but I don't find any bad files like the ones in the attachments. The code looks like:
for (int i = 0; i < 50; i++) {
  std::string fileName =
      pcd_dir + "/" + std::to_string(timestamp) + "_" + std::to_string(i) + ".pcd";
  pcl::io::savePCDFileBinaryCompressed(fileName, *cloud_out);
  std::cout << "save frame: " << frameItem << " to pcd: " << fileName
            << std::endl;
}
I checked the code that writes binary compressed PCD files, and it looks okay: every IO operation is checked for success, and an error is printed or an exception thrown when one fails. But if I understood you correctly, there are no errors while writing the PCD files, right?

I noticed that the bad/corrupted PCD file is exactly 2097152 bytes large, which is 2^21. My best guess is that something goes wrong with the AWS object storage: the first chunk(s) of the file get stored correctly, but the chunks after that somehow go missing. So my suggestion is that you try writing the files to "normal" disk space instead of the AWS object storage and see whether there are still bad/corrupted files.
Actually, uploading to object storage is a parallel operation; we first generate the files in the local file system (inside a Docker container) and then move them to storage in one go. So I don't think it is a problem with the object storage.
Can you verify then whether there are already bad/corrupted files directly after writing them to the local file system, or whether they only appear after moving them to the external storage? If you are not directly writing to AWS object storage as you previously said, are you still absolutely sure that there is enough free space where the files are written? You can also try to save your files as binary files (uncompressed) instead and check if there are any bad/corrupted files (e.g. files much smaller than the others).
Thank you so much for your reply! I called pcl::io::savePCDFileBinaryCompressed 50 times for every frame and saved every PCD file to the local file system; that produced 3000 (frames) × 50 = 150,000 PCD files, but I don't find any bad files.
Describe the bug
We use
pcl::io::savePCDFileBinaryCompressed(label_pcd_Name, *cloud_out);
to save our PCD files, and in most situations it works well. But for some output files (2 PCD files out of 100k), reading them fails with a coredump error like
Loading /mnt/data/Lidar_B3_1632536240.453701.pcd [1] 24581 bus error (core dumped)
even though we use different tools.
Context
PCD file header: although 317,794 points are declared in the header, the file size is actually only 2.2 MB, about half of the normal 4.5 MB.
pcl_viewer output: (screenshot)
CloudCompare output: (screenshot)
gdb message: (screenshot)
Expected behavior
The PCD file and its points are read correctly.
Current Behavior
The PCD file cannot be read; loading it crashes with a coredump.
Additional context
The original PCD file is attached here: Lidar_B3_1632536240.453701.zip
normal_and_coredump_2pcd.zip