CloudCompare / CloudComPy

Python wrapper for CloudCompare

point cloud saving problems #100

Open than2 opened 1 year ago

than2 commented 1 year ago

Hi Paul,

We just got data scanned with an RTC360, and we want to convert it from e57 to pcd files.

However, we are running into some problems.

When we call cc.SavePointCloud(pc_raw, new_fn), the Jupyter notebook kernel just dies (pc_raw is the cloud loaded from the e57 file and new_fn is the target pcd file name).
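
For reference, here is a minimal sketch of the conversion we run (the file names are placeholders, and the import alias and cc.initCC() call follow the usual CloudComPy examples, so treat them as assumptions):

    import cloudComPy as cc                          # assuming the usual CloudComPy import alias
    cc.initCC()                                      # initialization used in CloudComPy examples

    pc_raw = cc.loadPointCloud("scan_rtc360.e57")    # placeholder file name
    new_fn = "scan_rtc360.pcd"                       # target pcd file
    ret = cc.SavePointCloud(pc_raw, new_fn)          # this is where the kernel dies
    print("SavePointCloud returned:", ret)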

So I tried to convert the format manually with CloudCompare, but unfortunately that does not work either; CloudCompare just crashes.

The tricky thing is that when we use an e57 file from a BLK360, everything works fine.

To be clear, a single RTC360 scan is more than twice the size of a BLK360 scan, and the RAM of our computer is 64 GB.

Could the scanner type be the reason? Is there any way to solve this?

best, Tao

prascle commented 1 year ago

Hi Tao, I don't know the exact formats of the files generated by BLK360 and RTC360, and there may be an issue here. Did you post an issue on CloudCompare? The CloudCompare team probably has better knowledge of these formats. When you try the conversion with CloudCompare, if there is a problem with memory allocation, you should get a log message, not an abort. Otherwise, if you can post an rtc360 .e57 file somewhere, I can try to debug... Paul

than2 commented 1 year ago

Hi Paul,

Thanks for the explanation. I can also post it on the CloudCompare tracker.

Additionally, just to let you know, I found out why that error happens: each scan is too large (around 2.2 GB), so whether I use CloudCompare or CloudComPy, it either aborts or the kernel dies.

Then I tried downsampling the point cloud with a 0.05 m spacing, and after that both CloudCompare and CloudComPy save the pcd file successfully.
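
For reference, a minimal sketch of the downsampling workaround (this assumes CloudComPy's CloudSamplingTools.resampleCloudSpatially and partialClone calls as shown in the CloudComPy examples; exact names and return values may differ between versions):

    import cloudComPy as cc
    cc.initCC()

    pc_raw = cc.loadPointCloud("scan_rtc360.e57")    # placeholder file name

    # Spatial subsampling: keep roughly one point per 0.05 m, no scalar-field modulation.
    params = cc.CloudSamplingTools.SFModulationParams(False)
    ref = cc.CloudSamplingTools.resampleCloudSpatially(pc_raw, 0.05, params)
    (pc_small, res) = pc_raw.partialClone(ref)       # materialize the subsampled cloud

    cc.SavePointCloud(pc_small, "scan_rtc360_5cm.pcd")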

That's why I guess the size causes the crash in CloudCompare and CloudComPy.

Downsampling works, but with such a large spacing, detailed information can be lost. Also, I think the memory of our computer is enough (64 GB).

Might cc.SavePointCloud have a problem when saving large data from e57 to a pcd file?

I could send you one scan if you want, though one scan is 2.2 GB.

best, Tao

dgirardeau commented 1 year ago

Well, PCD files are saved by the PCL library itself. So either it's the conversion from the CloudCompare cloud structure to the PCL one that fails, or it's the saving itself...

Can you monitor the memory consumption during the saving attempt and try to detect if it reaches a peak before crashing for instance?
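
If it helps, here is a minimal sketch of a memory logger that can run in a background thread while the save is attempted (it uses the third-party psutil package; the one-second interval is arbitrary):

    import threading
    import time

    import psutil  # third-party: pip install psutil

    def log_memory(period_s=1.0):
        """Print process and system RAM usage once per period."""
        proc = psutil.Process()
        while True:
            rss_gb = proc.memory_info().rss / 1e9
            used_gb = psutil.virtual_memory().used / 1e9
            print(f"process RSS: {rss_gb:.1f} GB | system used: {used_gb:.1f} GB")
            time.sleep(period_s)

    threading.Thread(target=log_memory, daemon=True).start()
    # ...then call cc.SavePointCloud(...) in the main thread and watch for a peak.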

than2 commented 1 year ago

Thanks, Daniel. I guess the issue is the conversion from the .e57 file to the PCL cloud structure; the size is too large, so it crashes?

Also, please see the attached memory monitoring video, where you can see the instant drop in memory. I used my laptop, so the memory is 32 GB.

https://user-images.githubusercontent.com/48503256/232064987-aabe4bca-a9f0-4aa3-8ddc-176b4b18b50a.mp4

dgirardeau commented 1 year ago

Do you mean that the memory drops from 20 GB to 9 GB after the "crash"?

When you use the standard version of CC, what do you see? An error? Or the application exits?

than2 commented 1 year ago

Yes, I mean that the memory drops from 20 GB to 9 GB after the "crash".

When I use the standard version of CC, there is no problem with the function cc.loadPointCloud. However, when I use cc.SavePointCloud, the kernel just dies and no error is shown.

[screenshot: error]

dgirardeau commented 1 year ago

So it's really an issue with CloudComPy and not CloudCompare? Is it because of the Python environment? Does it run with some limitations on the memory and/or its usage?

than2 commented 1 year ago

I guess both of them have this issue, or it might be a problem with the PCL library? I mean, normally, if the data size is below 1 GB, the function works well, so it feels weird that this error happens with larger data. Also, I set the virtual memory to 32 GB in the Windows Subsystem for Linux.

I just found this answer: save to pcd crash. Might that be the reason?

dgirardeau commented 1 year ago

Ah sorry, I misread your message.

I was expecting you to test that on Windows 👼 .

And regarding the other issue, there has been quite a lot of rework since 2019, so it's hard to tell.

I would need a sample file to try to reproduce that on my side...

prascle commented 1 year ago

Hello, if you send a sample file, I will check on my side! Or maybe it's enough to generate a huge cloud with some scalar fields and try to save it as a pcd file... Paul
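
For instance, something along these lines might reproduce it without a sample file (a minimal sketch, assuming CloudComPy's numpy interface with ccPointCloud and coordsFromNPArray_copy; generating this many points needs several GB of RAM):

    import numpy as np
    import cloudComPy as cc
    cc.initCC()

    # ~130 million points of 3 x float32 is roughly the size of the problematic scans.
    n = 130_000_000
    coords = np.random.rand(n, 3).astype(np.float32)

    cloud = cc.ccPointCloud("hugeCloud")
    cloud.coordsFromNPArray_copy(coords)             # assumed numpy-interface method

    ret = cc.SavePointCloud(cloud, "hugeCloud.pcd")
    print("SavePointCloud returned:", ret)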

than2 commented 1 year ago

Thanks, guys, here is the sample data from our scanner; it is around 2.3 GB.

demo.zip

best, Tao

dgirardeau commented 1 year ago

Thanks for the file. I've been able to track down the source of the crash. It's at this line: https://github.com/PointCloudLibrary/pcl/blob/master/common/src/io.cpp#L192

What happens is that we have 123,294,283 points, multiplied by 3 (x, y, and z) and 8 bytes per field. This makes an offset into this huge 'data' array of almost 3 billion, which exceeds the limit of the signed int value in which it's stored (about 2.1 billion). It's a bit of a pity to use a signed int for this purpose (an unsigned int would have allowed arrays twice as large, but ideally a 64-bit integer would have avoided the issue completely).
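
A quick back-of-the-envelope check of that offset, using the numbers above:

    points = 123_294_283
    fields = 3                    # x, y, z
    bytes_per_field = 8
    offset = points * fields * bytes_per_field
    print(offset)                 # 2959062792
    print(2**31 - 1)              # 2147483647, the maximum of a signed 32-bit int
    print(offset > 2**31 - 1)     # True: the stored offset overflows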

The conclusion is that PCL doesn't seem to be used with such large clouds... I'll try to patch PCL on my side at least and see if it helps.

dgirardeau commented 1 year ago

Ok, this works fine. It's really silly to use a signed int instead of an unsigned long long...

For reference, here is the patched code (lines 182 to 206 of io.cpp):

  // Iterate over each point and perform the appropriate memcpys
  std::size_t point_offset = 0;
  for (uindex_t cp = 0; cp < cloud_out.width * cloud_out.height; ++cp)
  {
    memcpy (&cloud_out.data[point_offset], &cloud2.data[cp * cloud2.point_step], cloud2.point_step);
    pcl::uindex_t field_offset = cloud2.point_step;

    // Copy each individual point, we have to do this on a per-field basis
    // since some fields are not unique
    for (std::size_t i = 0; i < cloud1_unique_fields.size (); ++i)
    {
      const pcl::PCLPointField& f = *cloud1_unique_fields[i];
      pcl::uindex_t local_data_size = f.count * static_cast<pcl::uindex_t>(pcl::getFieldSize (f.datatype));
      pcl::uindex_t padding_size = field_sizes[i] - local_data_size;

      memcpy (&cloud_out.data[point_offset + field_offset], &cloud1.data[static_cast<std::size_t>(cp) * cloud1.point_step + f.offset], local_data_size);
      field_offset +=  local_data_size;

      //make sure that we add padding when its needed
      if (padding_size > 0)
        memset (&cloud_out.data[point_offset + field_offset], 0, padding_size);
      field_offset += padding_size;
    }
    point_offset += field_offset;
  }

I'll just need to figure out how to share this with the original project.

dgirardeau commented 1 year ago

And I've updated the latest 2.13.alpha version online.

than2 commented 1 year ago

Great, thanks, Daniel. I tested the new CloudCompare version, and it works fine now.

Could you also update this in the new version of CloudComPy?

I appreciate your help!!!

best, Tao

than2 commented 1 year ago

I checked the PCL package of CloudComPy, and I do not find io.cpp, only io.hpp.

I guess the reason is that the bundled PCL version is 1.12?

Also, just to let you know, I think there is a small bug in CloudCompare: when I delete the cloud from the left panel, the cloud is still shown in the 3D window and cannot be removed.

[screenshot]

dgirardeau commented 1 year ago

For CloudComPy, I guess Paul would have to patch his own version of PCL (this is something that has to be solved at compile time). That's why it's a bit tricky to rely on a patch of the PCL code (but I don't see another means to fix this issue).

And for the other issue, this is a bit strange, I can't reproduce it on my side. I guess it's an FBO issue or something like that. What happens if you try to rotate the view? Does the cloud finally disappear?

than2 commented 1 year ago

Indeed, PCL 1.12 should get some updates. The reason why I need to use e57 is that it contains the scan positions, and also the Leica software cannot export pcd files. However, when doing calculations, we found the pcd format more stable. Let's wait for Paul's response then.

I see, thanks for the explanation, Daniel. I switched to another computer, and there the cloud does finally disappear, so there must be some problem with that first computer.

prascle commented 1 year ago

Great job Daniel! CloudComPy is built with the PCL libraries from the Conda packages. I will have to rebuild at least libpcl_io (.dll and .so) and add it to my package. This should not be difficult, but I need to make some adjustments to my process. I hope to be able to do it in the next few days. Paul

than2 commented 1 year ago

Thanks for letting me know, Paul! You guys are super helpful!

Just take your time, and please let me know when you have new updates.

best, Tao

than2 commented 1 year ago

Hi Paul,

Just wondering, have you already updated libpcl_io (.dll and .so)?

best, Tao

prascle commented 1 year ago

Hello Tao, I am preparing a new version of CloudComPy (Linux and Windows), with several new features and bug fixes, including this one. This one works on Linux, but I have some regressions with the latest developments on CloudCompare, which have not been fixed yet. Unfortunately, I won't be able to work on CloudComPy for the next 2 weeks. Sorry for the delay.

best, Paul

than2 commented 1 year ago

Thanks for letting me know, Paul.

best, Tao