[ ] Experiment with depth sensor data for foreground isolation in OpenCV (see the foreground-isolation sketch after this list).
[ ] Apply appropriate filters to reduce noise in both RGB and depth images; Gaussian and median filters are common choices (see the noise-filtering sketch below).
[ ] Consider using adaptive filtering techniques that adjust filter parameters based on local image characteristics.
[ ] Apply depth map enhancement techniques, such as edge-preserving smoothing or hole-filling algorithms, to improve depth map quality.
[ ] Consider bilateral filtering or guided filtering to smooth depth maps while preserving depth edges (see the depth-enhancement sketch below).
[ ] Ensure proper alignment of RGB and depth images through calibration and registration processes.
[ ] Use techniques like feature-based registration or intensity-based registration to align the two modalities accurately (see the registration sketch below).
[ ] Normalize depth values to a consistent range to improve comparability between frames and scenes. Min-max normalization or z-score normalization can be used depending on the application (see the normalization sketch below).
[ ] Extract relevant features from both RGB and depth images to reduce the dimensionality of the data and focus on important information.
[ ] Utilize techniques like blob detection, edge detection, or texture analysis to extract meaningful features (see the feature-extraction sketch below).
[ ] Optimize algorithms and processing pipelines for real-time performance by using efficient data structures and parallelization techniques.
[ ] Consider deploying preprocessing steps on hardware with GPU acceleration to speed up computations.
[ ] Utilize asynchronous processing and buffering techniques to overlap computation with data acquisition and transfer (see the buffered-capture sketch below).
[ ] Use compression standards like JPEG or PNG for RGB images and compression algorithms like run-length encoding or delta encoding for depth data (see the compression sketch below).
[ ] Hardware Synchronization: Cameras with external trigger inputs can be synchronized using a common trigger signal, ensuring simultaneous capture of RGB and depth frames.
[ ] Hardware Timestamping: Use hardware timestamping mechanisms provided by cameras to record accurate capture times for each frame, then align frames based on those timestamps during synchronization.
[ ] Use techniques like interpolation or resampling to estimate the temporal offset between frames and align them accordingly.
[ ] Adjust the timing of one of the streams based on the computed offset to achieve synchronization (see the timestamp-matching sketch below).
[ ] Apply Kalman filtering techniques to estimate the temporal alignment between RGB and depth frames, and use the predicted alignment to adjust the timing of one stream (see the Kalman-filter sketch below).
[x] Record initial values for camera calibration settings
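
Foreground-isolation sketch: a minimal example of isolating foreground pixels by thresholding depth in OpenCV, assuming a uint16 depth map in millimeters that is already registered to the RGB frame; the file names and the 0.5-1.5 m band are placeholders, not values fixed by this checklist.

```python
import cv2

# Assumed inputs: rgb (H x W x 3, uint8) and depth (H x W, uint16, millimeters),
# already aligned pixel-for-pixel. File names are placeholders.
rgb = cv2.imread("rgb.png")
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)

# Keep only pixels whose depth falls in an illustrative 0.5-1.5 m band.
near_mm, far_mm = 500, 1500
mask = cv2.inRange(depth, near_mm, far_mm)

# Morphological opening removes speckle from the mask before it is applied.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Zero out everything outside the depth band in the RGB frame.
foreground = cv2.bitwise_and(rgb, rgb, mask=mask)
cv2.imwrite("foreground.png", foreground)
```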
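
Noise-filtering sketch: Gaussian blur on the RGB frame and a median filter on the depth map; this pairing is common because depth speckle behaves like salt-and-pepper noise, and the 5x5 kernel sizes are illustrative starting points.

```python
import cv2

rgb = cv2.imread("rgb.png")
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)

# Gaussian blur suppresses sensor noise in RGB; sigma is derived from the
# kernel size when sigmaX=0.
rgb_smoothed = cv2.GaussianBlur(rgb, (5, 5), sigmaX=0)

# Median filtering is robust to the salt-and-pepper speckle typical of depth
# maps (medianBlur supports 16-bit single-channel input for ksize 3 and 5).
depth_smoothed = cv2.medianBlur(depth, 5)
```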
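
Depth-enhancement sketch: edge-preserving smoothing via a bilateral filter plus hole filling via inpainting, assuming zero-valued pixels mark missing depth; the filter parameters are illustrative, and sigmaColor is expressed in depth units (here assumed to be millimeters).

```python
import cv2
import numpy as np

depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)  # uint16 mm, assumed

# Bilateral filtering smooths flat regions while preserving depth edges.
depth_f = depth.astype(np.float32)
smoothed = cv2.bilateralFilter(depth_f, d=5, sigmaColor=50, sigmaSpace=5)

# Treat zero-depth pixels as holes and fill them by Telea inpainting.
holes = (depth == 0).astype(np.uint8)
filled = cv2.inpaint(smoothed, holes, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```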
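
Registration sketch: feature-based alignment using ORB matches and a RANSAC homography. This is only an approximation of true RGB-depth registration, which needs the intrinsics and extrinsics of both sensors; it also assumes the contrast-stretched depth image shares enough structure with the RGB view for matching to succeed.

```python
import cv2
import numpy as np

rgb_gray = cv2.cvtColor(cv2.imread("rgb.png"), cv2.COLOR_BGR2GRAY)
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)

# Stretch depth to 8 bits so ORB can operate on it.
depth_vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(depth_vis, None)
kp2, des2 = orb.detectAndCompute(rgb_gray, None)

# Cross-checked Hamming matching; keep the 100 strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the depth map into the RGB frame's coordinate system.
h, w = rgb_gray.shape
depth_aligned = cv2.warpPerspective(depth, H, (w, h))
```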
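
Normalization sketch: min-max and z-score normalization restricted to valid (non-zero) depth pixels, under the assumption that zeros mark missing measurements.

```python
import numpy as np

def minmax_normalize(depth):
    """Scale valid depth values into [0, 1]; holes (zeros) stay at 0."""
    d = depth.astype(np.float32)
    valid = d > 0
    lo, hi = d[valid].min(), d[valid].max()
    out = np.zeros_like(d)
    out[valid] = (d[valid] - lo) / (hi - lo)
    return out

def zscore_normalize(depth):
    """Shift valid depth values to zero mean and unit variance."""
    d = depth.astype(np.float32)
    valid = d > 0
    out = np.zeros_like(d)
    out[valid] = (d[valid] - d[valid].mean()) / d[valid].std()
    return out
```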
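
Feature-extraction sketch: Canny edges and blob detection on the RGB frame; the Canny thresholds and blob parameters are illustrative and should be tuned per scene.

```python
import cv2

gray = cv2.cvtColor(cv2.imread("rgb.png"), cv2.COLOR_BGR2GRAY)

# Edge map; 50/150 are common starting thresholds, not tuned values.
edges = cv2.Canny(gray, 50, 150)

# Blob detection, filtered by area so tiny speckle is ignored.
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 100
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(gray)
```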
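
Buffered-capture sketch: a producer thread grabs frames while the main thread processes them, with a bounded queue as the buffer so the camera never stalls. The camera index and queue depth are placeholders; on a CUDA-enabled OpenCV build the processing stage could additionally route through the cv2.cuda module for GPU acceleration.

```python
import queue
import threading
import cv2

frames = queue.Queue(maxsize=8)  # small buffer decouples capture from processing

def capture_loop(cap):
    """Producer: grab frames as fast as the camera delivers them."""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        try:
            frames.put(frame, timeout=1.0)
        except queue.Full:
            pass  # drop a frame rather than stall the camera

cap = cv2.VideoCapture(0)  # placeholder camera index
threading.Thread(target=capture_loop, args=(cap,), daemon=True).start()

while True:
    frame = frames.get()  # consumer runs while the producer keeps capturing
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    # ... further processing or display would go here ...
```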
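
Compression sketch: JPEG encoding for the RGB frame and a simple delta encoding for depth; neighboring depth pixels differ little, so the deltas compress well when handed to a general-purpose codec afterwards. Widening to int32 avoids overflow when differencing uint16 values.

```python
import cv2
import numpy as np

rgb = cv2.imread("rgb.png")
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)

# Lossy JPEG suits RGB; lossless PNG is the safer default for depth itself.
ok, jpg_bytes = cv2.imencode(".jpg", rgb, [cv2.IMWRITE_JPEG_QUALITY, 90])

# Delta-encode depth: store differences between consecutive pixel values.
flat = depth.astype(np.int32).ravel()
deltas = np.diff(flat, prepend=0)

# Decoding reverses the operation exactly.
restored = np.cumsum(deltas).astype(depth.dtype).reshape(depth.shape)
assert np.array_equal(restored, depth)
```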
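
Timestamp-matching sketch: pair each RGB frame with the depth frame whose capture time is nearest, rejecting pairs separated by more than a tolerance; the function name, timestamps, and 15 ms tolerance are hypothetical examples.

```python
import numpy as np

def match_by_timestamp(rgb_ts, depth_ts, max_gap=0.015):
    """Pair each RGB frame with the nearest-in-time depth frame.
    Pairs further apart than max_gap seconds are rejected."""
    depth_ts = np.asarray(depth_ts)
    pairs = []
    for i, t in enumerate(rgb_ts):
        j = int(np.argmin(np.abs(depth_ts - t)))
        if abs(depth_ts[j] - t) <= max_gap:
            pairs.append((i, j))
    return pairs

# Hypothetical capture times in seconds (e.g., from hardware timestamps):
rgb_ts = [0.000, 0.033, 0.066, 0.100]
depth_ts = [0.002, 0.035, 0.070, 0.098]
print(match_by_timestamp(rgb_ts, depth_ts))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```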
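
Kalman-filter sketch: a scalar Kalman filter that tracks a slowly drifting RGB-depth time offset from per-frame timestamp differences; the class name, noise parameters, and timestamps are illustrative, not tuned values.

```python
class OffsetKalman:
    """Scalar Kalman filter for a near-constant RGB-depth time offset.
    q models clock drift between frames; r models timestamp jitter."""

    def __init__(self, q=1e-8, r=1e-4):
        self.x = 0.0  # offset estimate in seconds
        self.p = 1.0  # estimate variance
        self.q, self.r = q, r

    def update(self, measured_offset):
        self.p += self.q                # predict: offset assumed ~constant
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (measured_offset - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = OffsetKalman()
for rgb_t, depth_t in [(0.000, 0.004), (0.033, 0.038), (0.066, 0.071)]:
    est = kf.update(depth_t - rgb_t)
print(f"estimated offset: {est:.4f} s")  # converges to about 0.0047 s
```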