MarekKowalski / LiveScan3D

LiveScan3D is a system designed for real-time 3D reconstruction using multiple Azure Kinect or Kinect v2 depth sensors simultaneously.
MIT License

Can LiveScan3D connect two KinectV2 cameras to output depth and RGB images? #74

Open zlyliangyu opened 1 month ago

zlyliangyu commented 1 month ago

Hello, I would like to ask whether LiveScan can export depth images and RGB images. I have tried many versions, but I can only output point cloud files after ICP fusion. My research requires depth images, but I could not find this feature in LiveScan. I have connected two Kinect v2 cameras on Ubuntu using ROS and do not know how to solve the calibration problem, so I am quite stuck.

ChristopherRemde commented 1 month ago

Hey! Unfortunately depth and RGB export functionality is only implemented for Azure Kinects at the moment. More sensors will be added in the future, including backwards compatibility for the Kinect v2, but this might take a while.

If you only need depth and RGB images, however, it should be fairly straightforward to write a small Python script that does this using the Kinect v2 Python library: https://github.com/Kinect/PyKinect2
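
Something along these lines could work as a starting point (just an untested sketch; PyKinect2 requires Windows with the Kinect for Windows SDK 2.0, and the SDK only handles one Kinect v2 per machine, so with two sensors you would run it once per PC):

```python
from pykinect2 import PyKinectV2, PyKinectRuntime
import numpy as np
import cv2

# Open the color and depth streams of the attached Kinect v2
kinect = PyKinectRuntime.PyKinectRuntime(
    PyKinectV2.FrameSourceTypes_Color | PyKinectV2.FrameSourceTypes_Depth)

while True:
    if kinect.has_new_depth_frame() and kinect.has_new_color_frame():
        depth = kinect.get_last_depth_frame()  # flat uint16 array, 512 x 424, depth in mm
        color = kinect.get_last_color_frame()  # flat uint8 array, 1920 x 1080 x 4 (BGRA)
        depth = depth.reshape((424, 512)).astype(np.uint16)
        color = color.reshape((1080, 1920, 4)).astype(np.uint8)
        cv2.imwrite("depth.png", depth)            # 16-bit PNG keeps the raw depth values
        cv2.imwrite("color.jpg", color[:, :, :3])  # drop the alpha channel
        break
```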

zlyliangyu commented 1 month ago

Thank you very much for sharing. I am still very confused about calibration. In LiveScan I can calibrate the two cameras into the same world coordinate system using the printed calibration marker, but I don't understand how to calibrate the two cameras with other methods. I need to process the images captured by the two cameras first and then fuse the point clouds from both cameras at the end, so I am worried that the ICP fusion result will be very poor if I don't calibrate them.

ChristopherRemde commented 1 month ago

Hey! I see, yes that makes everything a bit more complicated. The new version of LiveScan3D I'm currently working on (https://github.com/BuildingVolumes/LiveScan3D) allows for this workflow, but unfortunately only for Azure Kinect devices. I will add support for the Kinect v2 at a later point, but not in the near future, as I don't have the resources to do this at the moment.

I'm also not aware of alternative software that allows for this, but maybe Brekel Pointcloud v2 is able to do this? It has a free trial:

https://brekel.com/pointcloud-v2/

zlyliangyu commented 1 month ago

Thank you very much for sharing. I'll try it out

zlyliangyu commented 1 month ago

Wait a moment, are you saying that in LiveScan the Azure Kinect can output point clouds and images simultaneously? And is the output then two separate image frames, without the two cameras being fused?

ChristopherRemde commented 1 month ago

Not simultaneously; you capture either raw images or pointclouds. But I have a tool with which you can convert these raw images into a pointcloud again after the capture. This is not a special hardware feature of the Azure Kinect (the Kinect v2 could do the same), but I developed the new version of LiveScan for the Azure Kinect only (so far), as it provides better image quality.

zlyliangyu commented 1 month ago

Sorry, I may not have understood what you mean. Are you saying that the new version of LiveScan lets the Azure Kinect capture images and convert them into point clouds? I understand that part. So will the new version of LiveScan output images or point clouds?

ChristopherRemde commented 1 month ago

Sorry for the confusion. When you capture, you only capture the color and depth images. After your capture has finished, you then use another tool to convert these images to pointclouds. So you can have both as an output, but the pointclouds will only be output after the capture and conversion.

zlyliangyu commented 1 month ago

I understand. Is the depth and RGB image output similar to the point cloud output of the previous version of LiveScan, where after calibration the left half and the right half are combined into one complete image? Is that what it looks like? Is there an example image? I may consider purchasing this type of camera.

ChristopherRemde commented 1 month ago

This is an example output of one frame of a capture with five cameras. You get a .jpg with the color picture and a 16-bit .tiff picture with the depth information for each camera. Additionally, metadata about the camera intrinsics, calibration and frame timing is saved to allow for the pointcloud reconstruction later on.

Example Raw Image Data.zip

After pointcloud reconstruction, the pointcloud looks like this (this pointcloud is from a different frame of the capture!): pointcloud.zip
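
In case you want to do the conversion to a pointcloud yourself instead of using my tool, the principle looks roughly like this (not my actual converter, just a sketch with Open3D and made-up file names; it assumes the depth image is registered to the color resolution and that you substitute the real intrinsics from the metadata):

```python
import cv2
import open3d as o3d

# Hypothetical file names; use the per-frame files from the capture folder.
color_bgr = cv2.imread("frame_0000_color.jpg")
depth_raw = cv2.imread("frame_0000_depth.tiff", cv2.IMREAD_UNCHANGED)  # uint16, mm

color = o3d.geometry.Image(cv2.cvtColor(color_bgr, cv2.COLOR_BGR2RGB))
depth = o3d.geometry.Image(depth_raw)

# depth_scale=1000 converts millimeters to meters, depth_trunc cuts off far points
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=3.0,
    convert_rgb_to_intensity=False)

# Placeholder intrinsics (width, height, fx, fy, cx, cy); take the real
# values from the metadata saved with the capture.
intrinsics = o3d.camera.PinholeCameraIntrinsic(1920, 1080, 900.0, 900.0, 960.0, 540.0)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsics)
# The per-camera clouds can then be transformed with the calibration
# matrices and merged into a single cloud.
o3d.io.write_point_cloud("frame_0000.ply", pcd)
```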

zlyliangyu commented 1 month ago

This point cloud and the image are not from the same frame, but thank you very much for sharing. I am a second-year graduate student, and our whole research team uses your software. Everyone loves it very much, and I am also very grateful to you. I have another question: how close can the Azure Kinect be to the object while still capturing well? My project is modeling newborns in an incubator, and the point cloud from my Kinect v2 is very poor because the camera is too close to the incubator; it even splits. Due to environmental factors I can only keep the camera at this height, so I am not sure whether the Azure Kinect can support capturing at such a close, fixed distance.
pointcloud jpg

ChristopherRemde commented 1 month ago

Oh wow, interesting project! Great to hear that this software is being used a lot by your team!

So there seems to be a lot of factors involved here. Generally, the Azure Kinect has a much higher pointcloud quality than the Kinect v2. But I guess you are shooting through the incubator glass? This might introduce refraction effects, which could lower the capture quality. Additionally, the infrared light might be partially blocked by the glass, depending on the material. Maybe you can try shooting with a puppet at first, without any glass, to see how much it affects the capture.

Also the quality of the calibration has a high impact on the capture quality. Unfortunately the calibration quality is not very high at the moment, but we're working on improving this!

zlyliangyu commented 1 month ago

Yes, the capture is affected by the glass obstruction. With two Kinect v2 cameras I can only shoot through the limited openings on both sides of the incubator, which is very difficult. I have considered using two RealSense cameras to record video inside the incubator, but I still don't know how to calibrate the two cameras online.

ChristopherRemde commented 1 month ago

Hey! Yeah calibration is unfortunately not super trivial. I don't know how experienced you are with programming in general, but you may need to write your own software for this.

You'll need a marker, take captures of the marker with both cameras, and then extract the position and rotation of each camera. It is certainly an advanced task, but possible with libraries like OpenCV.

Here is a tutorial which I can recommend: https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
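
To give you an idea, a marker-based extrinsic calibration with OpenCV could look roughly like this (untested sketch with assumed checkerboard dimensions; K1/dist1 and K2/dist2 are the intrinsics of the two cameras, e.g. from the factory calibration or cv2.calibrateCamera):

```python
import numpy as np
import cv2

PATTERN = (9, 6)       # inner corner count of the checkerboard (adjust to your board)
SQUARE_SIZE = 0.025    # square edge length in meters (adjust to your board)

# 3D coordinates of the checkerboard corners in the board's own frame
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

def board_pose(gray, K, dist):
    """Return the 4x4 board-to-camera transform, or None if the board is not found."""
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    if not ok:
        return None
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = tvec.ravel()
    return T

# img1 / img2: synchronized grayscale images of the same board pose
# T1 = board_pose(img1, K1, dist1)   # board -> camera 1
# T2 = board_pose(img2, K2, dist2)   # board -> camera 2
# Transform that maps points from camera 2 into camera 1's frame:
# T_cam2_to_cam1 = T1 @ np.linalg.inv(T2)
```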

zlyliangyu commented 1 month ago

I'm sorry, my coding skills are not sufficient to develop calibration software. Recently I have been trying to switch to the Azure Kinect, so that I can be sure of getting image output; besides, it is smaller than the Kinect v2. I am considering placing it inside the incubator, but I am not sure whether I can successfully calibrate and record point clouds at such a close distance. Additionally, when I compile the LiveScan development branch and run it, I only get the client and not the server, which is very frustrating.

ChristopherRemde commented 1 month ago

Please just use the Pre-Release downloads available here :)

https://github.com/BuildingVolumes/LiveScan3D/releases/tag/v1.2.alpha1

If you encounter any issues, it would be great if you could open an issue in that repository; I'm happy to help!

zlyliangyu commented 1 week ago

I'm sorry, I have run into another difficulty. Instead of using the Azure Kinect to collect data, my advisor instructed me to use ROS to connect two RealSense cameras. In Python, I calculated the distances between the corners of a checkerboard pattern, derived the relative transformation matrix from camera 2 to camera 1's coordinate system, and then reconstructed a point cloud. However, after visualizing the point cloud, I noticed cracks in the overlapping areas and splitting at the top. I specifically need the point cloud of the chest area. I applied ICP and filtering to the point cloud, but the cracks persist. I don't understand the reason; previously, LiveScan produced separate left and right half point clouds that aligned perfectly.

ChristopherRemde commented 1 week ago

Hey! Just as a disclaimer: as this is outside the scope of LiveScan3D support, I won't be able to give you detailed feedback. This can have multiple causes, for example imprecise depth data from the cameras or an imprecise calibration. You could try taking multiple samples with the checkerboard and then averaging them.

zlyliangyu commented 1 week ago

Do you mean calculating the relative transformation matrix by averaging over chessboards of multiple sizes?

ChristopherRemde commented 1 week ago

Hey! No, I don't mean multiple sizes, but moving the chessboard into different positions and rotations (about 20 different poses), calculating the relative transformation matrix for each pose, and then averaging all the matrices into one. When you take only one sample, the calibration can be inaccurate.
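
As a rough sketch of that averaging step (assuming SciPy is available and the 4x4 cam2-to-cam1 transforms from each board pose are already computed, e.g. as in the snippet above; rotation matrices should not be averaged element-wise, hence the rotation mean):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def average_transforms(transforms):
    """Average a list of 4x4 cam2-to-cam1 transforms, one per board pose.

    Rotations are combined with a proper rotation mean (Rotation.mean),
    translations with an arithmetic mean. Assumes the samples are close
    to each other, i.e. there are no gross outliers among the poses.
    """
    mean_R = Rotation.from_matrix([T[:3, :3] for T in transforms]).mean().as_matrix()
    mean_t = np.mean([T[:3, 3] for T in transforms], axis=0)

    T_avg = np.eye(4)
    T_avg[:3, :3] = mean_R
    T_avg[:3, 3] = mean_t
    return T_avg

# e.g. transforms = [T_pose_1, T_pose_2, ..., T_pose_20] from ~20 board poses
```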