Yunya-Hsu opened this issue 1 month ago
Hi @Yunya-Hsu. As you are already using `import time`, you could follow the programming logic below:
It is necessary to stop the pipeline to complete the writing of the bag file, otherwise you may end up with an unreadable bag.
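As a point of reference, a timed recording that is finalized by stopping the pipeline could be sketched like this (the `record_clip` name is mine; it assumes pyrealsense2 and a connected camera, so it cannot run without hardware):

```python
import time


def record_clip(path, seconds):
    """Record `seconds` of streaming into a bag file at `path`.

    Sketch only: requires pyrealsense2 and a connected RealSense camera.
    """
    import pyrealsense2 as rs  # imported lazily so the sketch loads without the SDK

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_record_to_file(path)  # write everything the pipeline streams to a bag
    pipeline.start(config)
    try:
        end = time.time() + seconds
        while time.time() < end:
            pipeline.wait_for_frames()
    finally:
        # Stopping the pipeline is what finalizes (indexes) the bag file;
        # skipping this step can leave the bag unreadable.
        pipeline.stop()
```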
@MartyG-RealSense Thanks for the quick reply!
> As you are already using `import time`, you could follow the programming logic below:
Our use case is more complicated. For example, if recording is triggered at 00:10, we need to capture the BAG stream from 00:08 to 00:15 (i.e. 2 seconds before and 5 seconds after the trigger point).
In the future, the 't' trigger key will be replaced by movement detection, and the pipeline will work as follows:
> It is necessary to stop the pipeline to complete the writing of the bag file, otherwise you may end up with an unreadable bag.
I noticed that in RealSense Viewer (tested with version v2.56.1), the recording process is more streamlined: you can start and stop recording without interrupting the camera pipeline/streaming, and it automatically generates separate `.bag` files.
How can I implement this same functionality in Python? I'd like to achieve continuous streaming while being able to start/stop recording at will.
@MartyG-RealSense
I have good news about question 2 (starting/stopping recording without interrupting the camera pipeline/streaming) - I've found a solution and included the sample code below. Regarding question 1 (capturing frames from 2 seconds before the trigger point), unfortunately, I haven't had any success yet...
```
import pyrealsense2 as rs
import numpy as np
import cv2
import time
import os


class RealSenseRecorder:
    def __init__(self):
        self.width = 640
        self.height = 360
        self.fps = 90
        self.is_recording = False
        self.recorder = None
        self.filename = None
        self.pipeline = None
        self.profile = None
        self.output_folder = 'recordings'
        if not os.path.exists(self.output_folder):
            os.makedirs(self.output_folder)

    def start_pipeline(self):
        self.pipeline = rs.pipeline()
        config = rs.config()
        config.enable_stream(rs.stream.depth, self.width, self.height, rs.format.z16, self.fps)
        config.enable_stream(rs.stream.color, self.width, self.height, rs.format.rgb8, self.fps)
        self.profile = self.pipeline.start(config)

    def start_recording(self):
        if self.is_recording:
            print('is recording...')
            return False
        try:
            self.filename = os.path.join(self.output_folder, f'video_{int(time.time())}.bag')
            device = self.profile.get_device()
            self.recorder = rs.recorder(self.filename, device)
            self.is_recording = True
            print(f'Start recording at: {self.filename}')
            return True
        except Exception as e:
            print(f'Error on starting recording: {e}')
            self.filename = None
            self.recorder = None
            self.is_recording = False
            return False

    def stop_recording(self):
        if not self.is_recording:
            print('Currently camera is not recording...')
            return False
        try:
            self.recorder.pause()
            print(f'Stop recording at {self.filename}')
            return True
        except Exception as e:
            print(f'Error on stopping recording: {e}')
            return False
        finally:
            self.is_recording = False
            self.recorder = None
            self.filename = None

    def run(self):
        self.start_pipeline()
        try:
            while True:
                key = cv2.waitKey(1)
                try:
                    frames = self.pipeline.wait_for_frames()
                except Exception as e:
                    print(f'Error when getting frames: {e}')
                    break
                if key == ord('q'):
                    print('stop and exit')
                    break
                if key == ord('t'):
                    self.start_recording()
                if key == ord('s'):
                    self.stop_recording()
                try:
                    color_frame = frames.get_color_frame()
                    if not color_frame:
                        continue
                    color_image = np.asanyarray(color_frame.get_data())
                    if self.is_recording:
                        cv2.putText(color_image, 'REC', (30, 30),
                                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
                    cv2.imshow('RealSense', color_image)
                except Exception as e:
                    print(f'Error when handling color frame and image show: {e}')
        finally:
            if self.is_recording:
                self.stop_recording()
            self.pipeline.stop()
            cv2.destroyAllWindows()


if __name__ == '__main__':
    recorder = RealSenseRecorder()
    recorder.run()
```
It's great to hear that you achieved a solution to question 2 - thanks so much for sharing your code!
In regard to question 2, if you are writing a bag then the pipeline should be closed at the end of the recording anyway, regardless of whether you are changing the configuration, or the file may be incomplete. When playing back the recorded bag, you may then get an error like this:

```
RuntimeError: Failed to resolve request. Request to enable_device_from_file("file.bag") was invalid, Reason: Failed to create ros reader: Bag unindexed
```
@MartyG-RealSense
Thanks for the reminder, I have added `pipeline.stop()` when closing the pipeline.
Regarding question 1 (capturing frames from 2 seconds before the trigger point), do you have any suggestions, please?
Unless the stream was being captured continuously, it would be difficult to capture the frames from 2 seconds before the trigger point, because the program has no way to know when the trigger will be activated.
If the stream was continuously active then there might be the possibility to jump back to an earlier timestamp 2 seconds old (or approximately 60 frames back if using 30 FPS) and then proceed onwards from that point.
Another possibility would be to 'cheat': when the trigger is activated, capture for 2 seconds and then activate the `self.start_recording()` instruction that normally begins immediately after the 't' keypress.
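One general way to implement the 'jump back' idea, given that the stream is analyzed continuously anyway, is a fixed-size rolling buffer that always holds the last couple of seconds of frames. This is a language-level sketch, not RealSense-specific: with pyrealsense2, the buffered objects would need to be retained with `frames.keep()` or copied out to numpy arrays, since framesets are otherwise recycled.

```python
from collections import deque


class PreTriggerBuffer:
    """Keep roughly the last `pre_seconds` of frames so that, when a trigger
    fires, the frames from before the trigger are still available.

    Sketch only: `frame` can be any object (e.g. a frameset retained with
    pyrealsense2's frames.keep(), or a copied numpy image).
    """

    def __init__(self, fps, pre_seconds):
        # Oldest frames drop off automatically once the deque is full.
        self.buffer = deque(maxlen=int(fps * pre_seconds))

    def push(self, frame):
        self.buffer.append(frame)

    def on_trigger(self):
        # Snapshot of everything captured in the pre-trigger window.
        return list(self.buffer)


# Usage with dummy frames: at 30 FPS, a 2-second buffer holds the last 60 frames.
buf = PreTriggerBuffer(fps=30, pre_seconds=2)
for i in range(100):
    buf.push(i)
pre_frames = buf.on_trigger()  # frames 40..99
```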
@MartyG-RealSense
> If the stream was continuously active then there might be the possibility to jump back to an earlier timestamp 2 seconds old (or approximately 60 frames back if using 30 FPS) and then proceed onwards from that point.
In our use case, the video stream will run continuously as we must constantly receive and analyze frames to detect fall events, so this approach seems feasible. I would like to understand how to rewind to earlier timestamps and resume recording from that point. Could you please provide some sample code to demonstrate this functionality? Thanks a lot.
The first step would likely be to identify the current frame number before you can rewind. Placing a `frames.get_frame_number()` instruction immediately after the `wait_for_frames()` line might accomplish that by storing the current frame number in a variable called `frame_number`.
```
try:
    frames = self.pipeline.wait_for_frames()
    frame_number = frames.get_frame_number()
    print(frame_number)  # print current frame number
except Exception as e:
    print(f'Error when getting frames: {e}')
    break
```
@MartyG-RealSense
Thanks for the information. While I can indeed use `frames.get_frame_number()` to obtain the frame number, my question is: once I identify a specific frame number that needs to be recorded, how do I actually write it?
For example, with a camera running at 60 fps, let's say we detect a fall event at frame 1050. We need to record 2 seconds before and 5 seconds after this event, meaning we need to save frames 930 through 1350 into a `.bag` file. How exactly should I implement this write operation?
Would you please provide some potential code examples/solutions for this recording challenge? Thanks.
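For reference, the frame-window arithmetic in this example can be captured in a small helper (the name `trigger_window` is mine, for illustration only):

```python
def trigger_window(trigger_frame, fps, pre_seconds, post_seconds):
    """Return the (start_frame, end_frame) range to keep around a trigger."""
    start = trigger_frame - pre_seconds * fps
    end = trigger_frame + post_seconds * fps
    # Clamp the start so an early trigger doesn't produce a negative frame number.
    return max(start, 0), end


# Fall detected at frame 1050 on a 60 FPS stream:
print(trigger_window(1050, fps=60, pre_seconds=2, post_seconds=5))  # (930, 1350)
```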
https://github.com/IntelRealSense/librealsense/issues/12564 has a Python script that defines a variable called frame_index and then does some frame saving based on calculations with that saved frame number. It might provide some useful insights that you can adapt for your own application.
```
if frame_index % 5 == 0:
    timestamp_per_frame = datetime.datetime.now().strftime("%Y%m%d_%H%M%S_%f")
    filename_rgb = f"frame_rgb_{frame_index}_{timestamp_per_frame}.png"
    filename_depth = f"frame_depth_{frame_index}_{timestamp_per_frame}.png"
    cv2.imwrite(os.path.join(output_directory, filename_rgb), color_image_rgb)
    cv2.imwrite(os.path.join(output_directory, filename_depth), depth_colormap)
```
@MartyG-RealSense Thanks for your response, but this isn't what I was looking for at all :(
I have a recorded bag file containing frames from 2 seconds before to 5 seconds after the event. Currently I need to trim this file to retain only the essential data while preserving the `.bag` file format.
I found a similar discussion on the Intel community forum, where they also want to edit a `.bag` file, but it hasn't received any responses yet.
Are there any solutions or approaches you could recommend?
https://community.intel.com/t5/Items-with-no-label/Edit-color-frames-from-the-recorded-bag-file-amp-save-as-a-new/m-p/1190925?profile.language=en
As your link suggests, if you use a pre-recorded bag file as the data source for a script instead of a live camera then by manipulating which of the bag's frames are played you could record only selected frames to a new, shorter and smaller bag file.
When seeking to skip to a particular frame of a bag file, it is advised to set the `set_real_time` parameter to `False` and then skip through the bag file sequentially from frame 1 until you reach the desired start position in the bag, like fast-forwarding through a video. In your case, that start position is 2 seconds before a certain trigger point.
If you know where the trigger point is though, then instead of saying '2 seconds before the trigger and then 5 seconds after', it may be easier to describe it as 'skipping from frame 1 to frame 1000 and then using frames 1000 to 1150' (assuming that the frames are advancing at a rate of 30 per second if the FPS was set at 30 during recording). Then close the recording at the 1150th frame.
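A sketch of that skip-and-copy idea follows. The helper names are mine, and whether `rs.recorder` can wrap a playback device (so that pausing/resuming the recorder selects which played-back frames get written) is an assumption I have not verified, so treat the recording half as a starting point rather than a confirmed recipe.

```python
def in_window(frame_no, start_frame, end_frame):
    """Pure helper: should this playback frame be copied to the new bag?"""
    return start_frame <= frame_no <= end_frame


def trim_bag(src_path, dst_path, start_frame, end_frame):
    """Sketch: play `src_path` without real-time pacing and copy frames
    `start_frame`..`end_frame` into `dst_path`.

    Unverified assumption: rs.recorder accepts a playback device and its
    pause()/resume() controls which frames are written.
    """
    import pyrealsense2 as rs  # lazy import: sketch only, needs the SDK and a bag file

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_device_from_file(src_path, repeat_playback=False)
    profile = pipeline.start(config)

    playback = profile.get_device().as_playback()
    playback.set_real_time(False)  # read as fast as possible without dropping frames

    recorder = rs.recorder(dst_path, profile.get_device())
    recorder.pause()  # don't write anything until the window starts
    recording = False
    try:
        while True:
            # Raises once the non-repeating playback reaches the end of the file.
            frames = pipeline.wait_for_frames()
            n = frames.get_frame_number()
            if in_window(n, start_frame, end_frame) and not recording:
                recorder.resume()
                recording = True
            if n > end_frame:
                break
    finally:
        pipeline.stop()  # finalize the trimmed bag
```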
Trimming bags in the librealsense SDK is harder than doing it in ROS though, where there are bag editing tools like the one here:
https://github.com/AIT-Assistive-Autonomous-Systems/ros2bag_tools?tab=readme-ov-file#chaining
Hi @Yunya-Hsu Do you require further assistance with this case, please? Thanks!
Issue Description
Objective: Implement a triggered recording system for a RealSense camera that captures the stream from 2 seconds before to 5 seconds after the trigger point.
Current Implementation: Using `rs.recorder.pause(recorder)` and `recorder.resume()` to control recording (example code as below).
Questions:
1. How can I capture the frames from 2 seconds before the trigger point?
2. Each time a recording finishes and the `.bag` file is written, I have to restart the pipeline with a new configuration, which costs 1 to 2 seconds. Is there any way to avoid stopping/starting the pipeline every time a recording is done?
Note: While searching for a solution, I saw the `save_single_frameset()` method, but it doesn't meet my requirement as it saves only a single frameset.