Closed: eric-stumpe closed this issue 1 year ago
Hi @WolfRoyal You mention that you have other additional cameras in the setup. Are they all L515, or a mix of L515 and 400 Series cameras?
In an all-L515 setup, cameras whose fields of view overlap can experience interference. This can be resolved by using the multi-camera synchronization described in the Intel white-paper document at the link below.
https://dev.intelrealsense.com/docs/lidar-camera-l515-multi-camera-setup
If the other cameras are 400 Series stereo depth camera models and they are active at the same time as the L515 then their infrared emissions may be interfering with the L515's image, as L515 is sensitive to infrared light sources.
Hi @MartyG-RealSense,
Thanks for the quick reply! The other cameras are not Intel RealSense cameras (they are hyperspectral and thermal cameras). Recordings are made sequentially (e.g., first the thermal camera, then 1 or 2 seconds later the lidar camera), so they should not interfere with one another.
It may be worth trying to change the L515's Visual Preset camera configuration to see whether the results are more stable, as alternating between the two images gives me the impression that it may be capturing real-time changes in lighting conditions in the scene. Python code for setting a preset is at https://github.com/IntelRealSense/librealsense/issues/8161#issuecomment-761908864
If there is no natural light in the scene, use no_ambient_light.
If there is some natural light in the scene, use low_ambient_light.
If the image is noisy, use short_range.
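The three rules above can be captured in a small helper that picks the preset name from the lighting conditions (the helper name choose_l515_preset is my own; the returned strings correspond to members of pyrealsense2's rs.l500_visual_preset enum, and the hardware-applying lines are a sketch that needs a connected L515):

```python
def choose_l515_preset(has_ambient_light: bool, image_is_noisy: bool) -> str:
    """Map the lighting rules above to an L515 visual preset name."""
    if image_is_noisy:
        return "short_range"
    return "low_ambient_light" if has_ambient_light else "no_ambient_light"


# Applying the chosen preset requires a connected L515 (sketch only):
# import pyrealsense2 as rs
# pipeline = rs.pipeline()
# profile = pipeline.start()
# depth_sensor = profile.get_device().first_depth_sensor()
# preset = getattr(rs.l500_visual_preset, choose_l515_preset(False, False))
# depth_sensor.set_option(rs.option.visual_preset, int(preset))
```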
Hi @MartyG-RealSense,
Thanks for the suggestions. I did some experiments with the different presets, but sadly this did not lead to improvements. However, I think I was able to narrow down the source of the problem: the larger distortion effects only occur for image sequences where the pipeline is stopped and restarted between recordings. They do not appear once a pipeline is started and all recordings are made without stopping the stream (I tested this both in Python and in the Intel RealSense Viewer by enabling/disabling the infrared stream).
By making the recordings without stopping the stream, I can now decrease the distortion variation within a sequence. The remaining problem is the difference that occurs when stopping and restarting. E.g., if I do the camera calibration on a non-stop streaming sequence, the computed distortion parameters might differ from those I would need for the actual experiments (with a fresh start of the stream).
Do you have any other ideas regarding this?
When the pipeline is started and auto-exposure is enabled, it can take several frames for the auto-exposure to settle down. This does not occur when starting the pipeline with auto-exposure disabled. Bad frames resulting from this settling-down period can be avoided by placing a 'for' instruction on the line before the wait_for_frames instruction so that the first several frames are skipped, like the code below.
for i in range(4):
    frames = pipeline.wait_for_frames()
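The warm-up skip can be wrapped in a small helper so every capture discards the same number of settling frames (the helper name skip_warmup_frames is my own; it only assumes the pipeline object exposes wait_for_frames(), as pyrealsense2's does):

```python
def skip_warmup_frames(pipeline, n=30):
    """Discard the first n frames so auto-exposure can settle,
    then return the first frameset considered stable."""
    if n < 0:
        raise ValueError("n must be non-negative")
    for _ in range(n):
        pipeline.wait_for_frames()  # warm-up frames discarded on purpose
    return pipeline.wait_for_frames()
```

Calling `frames = skip_warmup_frames(pipeline, 30)` at the start of each recording then replaces the bare for-loop.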
Thank you for the advice, I am going to try out your suggested for-loop modification. One question: what would be the auto exposure option for the depth/infrared sensor you are referring to? Because I can only find this option for the RGB sensor, or am I overlooking something?
Depth/Infrared sensor
for f in pipeline_profile.get_device().query_sensors()[0].get_supported_options(): print(f)
option.visual_preset
option.frames_queue_size
option.error_polling_enabled
option.depth_units
option.inter_cam_sync_mode
option.ldd_temperature
option.mc_temperature
option.ma_temperature
option.global_time_enabled
option.apd_temperature
option.depth_offset
option.freefall_detection_enabled
option.sensor_mode
option.host_performance
option.humidity_temperature
option.enable_max_usable_range
option.alternate_ir
option.noise_estimation
option.enable_ir_reflectivity
option.digital_gain
option.laser_power
option.confidence_threshold
option.min_distance
option.receiver_gain
option.post_processing_sharpening
option.pre_processing_sharpening
option.noise_filtering
option.invalidation_bypass
RGB sensor
for f in pipeline_profile.get_device().query_sensors()[1].get_supported_options(): print(f)
option.visual_preset
option.frames_queue_size
option.error_polling_enabled
option.depth_units
option.inter_cam_sync_mode
option.ldd_temperature
option.mc_temperature
option.ma_temperature
option.global_time_enabled
option.apd_temperature
option.depth_offset
option.freefall_detection_enabled
option.sensor_mode
option.host_performance
option.humidity_temperature
option.enable_max_usable_range
option.alternate_ir
option.noise_estimation
option.enable_ir_reflectivity
option.digital_gain
option.laser_power
option.confidence_threshold
option.min_distance
option.receiver_gain
option.post_processing_sharpening
option.pre_processing_sharpening
option.noise_filtering
option.invalidation_bypass
option.backlight_compensation
option.brightness
option.contrast
option.exposure
option.gain
option.hue
option.saturation
option.sharpness
option.white_balance
option.enable_auto_exposure <----
option.enable_auto_white_balance
option.frames_queue_size
option.power_line_frequency
option.auto_exposure_priority
option.global_time_enabled
option.host_performance
You are correct, enable_auto_exposure is only available on the RGB sensor of the L515 camera model, not on the depth/infrared sensor. I do apologize.
No worries.
I have now implemented the for-loop in the code. Qualitatively, there seems to be some improvement regarding the distortion effects.
I have created some GIFs for the following different settings (10 images recorded for each setting)
Pipeline is called each time - no for-loop
Pipeline is called each time - for-loop 60 frames before saving
REFERENCE: all images recorded within the same stream
Thanks very much for the update confirming some improvement when using a for-loop.
Do you see any improvement if you change wait_for_frames() to poll_for_frames()?
The difference between them is that wait_for_frames blocks until a complete frame is available, whilst poll_for_frames returns frames instantly without any blocking.
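Because poll_for_frames() can return an empty frameset when no new frame has arrived yet, it is usually wrapped in a retry loop rather than called once. A sketch (the helper name poll_until_frameset is my own; it only assumes the pipeline exposes poll_for_frames() and the frameset exposes size(), as in pyrealsense2):

```python
import time


def poll_until_frameset(pipeline, timeout_s=5.0, interval_s=0.005):
    """Poll the pipeline without blocking until a non-empty frameset
    arrives; raise TimeoutError after timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        frames = pipeline.poll_for_frames()
        if frames is not None and frames.size() > 0:
            return frames
        time.sleep(interval_s)  # brief pause to avoid a busy spin
    raise TimeoutError("no complete frameset within %.1f s" % timeout_s)
```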
Thanks for the help again,
Is there anything else I have to change in the code to get poll_for_frames() working? I get the error "RuntimeError: null pointer passed for argument "frame"" for the subsequent "frameset = align.process(frames)" line. Also, frames.size() is 0 for frames = pipeline.poll_for_frames() (it is 3 for frames = pipeline.wait_for_frames()).
Does the error still occur if you insert the following line after the poll_for_frames line:
frames.keep()
The keep() instruction stores frames in the computer's memory.
I added the frames.keep() statement, but I still get the same error message for "frameset = align.process(frames)".
If the problem is primarily due to stopping and starting the stream, an alternative may be to go back to using wait_for_frames() and have continuous streaming but set the Laser Power option to '0' when the L515 is not capturing, then set it to '100' before taking the capture. This should help to prevent the L515 interfering with the non-RealSense cameras with IR emissions when L515 is not capturing.
Set to 0: sensor.set_option(rs.option.laser_power, 0)
Set to 100: sensor.set_option(rs.option.laser_power, 100)
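One way to package the toggle is to raise the laser power only around the capture, discard a few frames so the emitter output stabilizes, and always lower the power again afterwards. A sketch (the function name, the settle_frames count, and passing the option key as a parameter are my own choices so the logic can be exercised without a camera; with pyrealsense2 the option key would be rs.option.laser_power):

```python
def capture_with_laser(sensor, pipeline, laser_option, settle_frames=5):
    """Set laser power to 100 for the capture, discard a few frames
    while the emitter ramps up, return one frameset, then drop the
    power back to 0 so the laser stays off between captures."""
    sensor.set_option(laser_option, 100)
    try:
        for _ in range(settle_frames):
            pipeline.wait_for_frames()  # discarded while the laser ramps up
        return pipeline.wait_for_frames()
    finally:
        sensor.set_option(laser_option, 0)  # dark between captures
```

This keeps the L515's IR emissions away from the other cameras except during the actual capture window.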
Hi @MartyG-RealSense,
I tried your suggested modification, but it did not lead to any notable improvements.
No laser power turn on/turn off:
With laser power turn on/turn off:
A technique that I have seen used to simulate a playback pause in a live stream, so that a frame can be stably captured, is to call pipeline.stop() and then write the image to file (the frame data remains available in the numpy array even though the pipeline has been stopped). After the capture, the pipeline is 'unpaused' with start().
Do you have more stable results if you move the image writing code to after pipeline.stop() please?
finally:
    # Stop streaming
    pipeline.stop()
    depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
    depth_colormap_dim = depth_colormap.shape
    color_colormap_dim = color_image.shape
    cv2.imwrite(path + '/' + "setID" + str(counter) + '_' + date + '_depth.png', depth_colormap)
    cv2.imwrite(path + '/' + "setID" + str(counter) + '_' + date + '_color_mapped.png', color_image)
    cv2.imwrite(path + '/' + "setID" + str(counter) + '_' + date + '_infrared.png', IR_image)
I will try it out. However, for the moment the lidar camera is in use, but I will get back to this thread and let you know the results once it is available to me again. Thank you again!
Hi @WolfRoyal Do you have an update about this case that you can provide, please? Thanks!
Hi @MartyG-RealSense, Thank you for asking. Unfortunately I will only be able to continue working with the camera in two weeks' time. Sorry for the inconvenience.
It's no trouble at all to keep the case open for a further time period. Thanks very much for the update!
Hi @WolfRoyal Do you have a new update about this case that you can provide, please? Thanks!
Hi @MartyG-RealSense, I am sorry for the delayed response. Currently we have some general issues with our setup (not related to the lidar camera), so I don't know when I will be able to work on the setup again. Do you think it's feasible to close this issue for now? I would reopen it once I get to work on it again. Anyway, thank you for all the suggestions so far, and I will definitely try out your latest suggestion next.
It is no problem at all to close this issue for the moment and re-open it at a later date when you are able to return to it.
Okay, perfect. Thank you.
Issue Description
Hi,
I am having the following issue: when taking consecutive infrared image recordings (L515) via pyrealsense2, varying distortion effects start to occur (see the two images below; most noticeable for the leaves on the left, best viewed by downloading both images and switching between them). The main problem is that I am using the infrared images for the extrinsic calibration with other additional cameras in the setup, so the distortion also affects the camera calibration (varying corner positions in the checkerboard image, leading to a worse reprojection error).
My question is: is this a hardware issue, or are there any settings I can use to fix this problem?
Thank you very much!
For reference, I have also included the full code showing how the infrared images are saved. For each recording, the following code is executed:
def save_IRdepthRGB(path, counter, constants):