Closed PsycIT closed 4 years ago
Hi @PsycIT,
Are you able to record IR stream using RealSense Viewer?
Thank you!
Using the Viewer, I can get a .bag file that contains three streams: color, depth, and infrared. But I want to build a program with my own UI using the SDK. Which format should I use to store the infrared data, and how do I save it?
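One option, instead of a .bag container, is to grab the IR frames yourself and encode them with OpenCV. IR frames arrive as single-channel 8-bit data (Y8), so the `VideoWriter` must be opened with `isColor=False`. A minimal sketch, assuming `pyrealsense2`, `numpy`, and `opencv-python` are installed and a D4xx camera is attached (the file name, resolution, and 5-second duration are arbitrary choices for illustration):

```python
def frames_to_capture(seconds, fps):
    """Number of frames needed to cover `seconds` of video at `fps`."""
    return int(seconds * fps)

def record_ir(path="ir.avi", w=640, h=480, fps=30, seconds=5):
    # Third-party imports kept local so the helper above is importable
    # even without a camera attached.
    import numpy as np
    import cv2
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # IR index 1 = left imager; rs.format.y8 = single-channel 8-bit.
    config.enable_stream(rs.stream.infrared, 1, w, h, rs.format.y8, fps)
    pipeline.start(config)

    # isColor=False tells the writer to expect single-channel frames.
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"XVID"),
                             fps, (w, h), isColor=False)
    try:
        for _ in range(frames_to_capture(seconds, fps)):
            frames = pipeline.wait_for_frames()
            ir = np.asanyarray(frames.get_infrared_frame(1).get_data())
            writer.write(ir)  # (h, w) uint8 array
    finally:
        writer.release()
        pipeline.stop()

if __name__ == "__main__":
    record_ir()
```

Note that an .avi only keeps the 8-bit intensity values; if you later need the raw stream with timestamps, the .bag format is still the lossless option.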
@PsycIT
Did you fix this problem?
I have bag files created with a RealSense D435 camera. I want to convert the RGB and depth images into .avi (or any other format) that can be fed into a pose-estimation model.
Second, these bag files are very large: 3 to 5 GB for 1 to 2 minutes of video. When converting a bag file to .avi or another format, how can I crop an object within the video and also decrease the frame rate?
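A sketch of one way to do both at once: play the bag back through a pipeline, drop frames to reach the target fps, crop with array slicing, and write the result with OpenCV. This assumes `pyrealsense2`, `numpy`, and `opencv-python`; the 30 fps source rate and the ROI values are assumptions you would replace with your own:

```python
def keep_frame(index, src_fps, dst_fps):
    """Frame decimation: keep roughly dst_fps out of every src_fps frames."""
    step = max(1, round(src_fps / dst_fps))
    return index % step == 0

def bag_to_avi(bag_path, out_path, roi=(0, 0, 640, 480), dst_fps=10):
    import numpy as np
    import cv2
    import pyrealsense2 as rs

    x, y, w, h = roi
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_device_from_file(bag_path, repeat_playback=False)
    profile = pipeline.start(config)
    # Process as fast as possible instead of replaying in real time.
    profile.get_device().as_playback().set_real_time(False)

    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"XVID"),
                             dst_fps, (w, h))
    i = 0
    try:
        while True:
            frames = pipeline.wait_for_frames()
            if keep_frame(i, 30, dst_fps):  # assumes the bag was recorded at 30 fps
                color = np.asanyarray(frames.get_color_frame().get_data())
                # Bags often store color as RGB; OpenCV expects BGR.
                color = cv2.cvtColor(color, cv2.COLOR_RGB2BGR)
                writer.write(color[y:y + h, x:x + w])  # crop to the ROI
            i += 1
    except RuntimeError:
        pass  # wait_for_frames raises once playback reaches the end of the bag
    finally:
        writer.release()
        pipeline.stop()
```

Cropping plus a lower frame rate plus lossy .avi encoding is what shrinks the output; the bag itself stays large because it stores raw frames.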
@PsycIT,
Please take a look at https://github.com/IntelRealSense/librealsense/tree/master/examples and let us know if you need additional help with this matter.
Thank you!
Issue Description
Hello. I am working with RealSense D415 cameras to store color, depth, and infrared images and bag files, and to get depth information back out of the saved bag files. I am stuck on part of the implementation; any advice would be very helpful.
Currently, on Windows, I am saving the color, depth, and infrared streams as .avi files using the pyrealsense2 library. The color and depth videos save correctly, but the infrared video does not. More precisely, I am building a program with a tkinter UI that shows the three streams, captures an image with a snapshot button, starts recording with a start button, and stops recording with an end button. Only the infrared video fails to save, even though I wrote the infrared-saving code the same way as the two videos that are stored correctly. What did I miss?
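A likely culprit when only the IR video fails: `cv2.VideoWriter` quietly writes nothing when the frame's channel count does not match its `isColor` flag, and IR frames are single-channel while color/depth-colormap frames are 3-channel. A small sanity-check helper plus the two possible fixes, sketched under the assumption that OpenCV is the writer being used:

```python
def writer_accepts(frame_ndim, is_color):
    """True if a cv2.VideoWriter opened with isColor=is_color will accept
    a frame with this many dimensions (2 = grayscale, 3 = BGR)."""
    return (frame_ndim == 3) == is_color

# Two equivalent fixes (sketch, assuming opencv-python):
#   1) Open the writer for grayscale frames:
#        writer = cv2.VideoWriter(path, fourcc, fps, (w, h), isColor=False)
#      and write the (h, w) uint8 IR array directly; or
#   2) Keep a color writer and expand the IR frame first:
#        bgr = cv2.cvtColor(ir, cv2.COLOR_GRAY2BGR)
#        writer.write(bgr)
```

If your code copied the color-stream writer settings verbatim for the IR stream, fix 1 is the one-line change.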
I also implemented saving a .bag file from the moment the start button is pressed until the end button is pressed, as in the source below. The .bag file is created, but when I open it with the Intel RealSense Viewer I get the following error: 'Failed to load file, Reason: Io in rs2_context_add_device (ctx: 0000029BE3F63D30, file: C): Failed to create ros reader: Bag unindexed'. However, I can still extract .png and .npy files from that same bag file using the source listed at the bottom of #3029. What should I change so that the bag file opens in the Viewer?
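"Bag unindexed" usually means the recording was never closed cleanly: the rosbag index is written when the file is closed, so `pipeline.stop()` has to run before your program exits (e.g., in your end-button handler, before tearing down the tkinter window). A minimal sketch, assuming `pyrealsense2` and a connected camera; the stream resolutions and 5-second duration are placeholders:

```python
def safe_record(bag_path="record.bag", seconds=5, fps=30):
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, fps)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, fps)
    config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, fps)
    config.enable_record_to_file(bag_path)  # every enabled stream goes into the bag
    pipeline.start(config)
    try:
        for _ in range(seconds * fps):
            pipeline.wait_for_frames()
    finally:
        # Crucial: stop() closes the recorder, which writes the rosbag
        # index. Killing the process before this leaves the bag unindexed.
        pipeline.stop()
```

As a side note on your Q2 below: with `enable_record_to_file`, the bag contains whichever streams were enabled, so enabling the color stream as above records color alongside depth and IR.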
PS: The question above is the most urgent one. There are a lot of questions in this post, so feel free to skip the ones below.
In addition, a somewhat different question. My ultimate goal is to use a 2D/3D camera to acquire videos of human upper-body movements (especially hand movements) and train deep-learning models on this data to build a classifier that identifies specific movements (e.g., classifying types of hand gestures).
Q1) Do I need ROS's rosbag filter to get depth values out of the bag file, or is there an easier way? Is there a way to save depth information directly as an .avi file rather than a .bag file? And can I get depth information back out of an .avi? (If depth or infrared information could be stored in an .avi file, would using the .avi be easier than extracting it from the bag file?)
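On saving depth as .avi: common .avi codecs are 8-bit per channel, while D415 depth is 16-bit millimeter values, so encoding depth as ordinary video destroys the actual distances; it is only useful for visualization. For later processing, a lossless per-frame dump is simpler than rosbag filtering. A sketch, assuming `numpy` and `opencv-python` (the helper names are mine, not SDK API):

```python
def mm_to_m(depth_mm, depth_scale=0.001):
    """Convert a raw D4xx depth value (uint16, in depth-scale units,
    typically millimeters) to meters."""
    return depth_mm * depth_scale

def save_depth_frame(depth_array, stem):
    """Save one 16-bit depth frame losslessly, two ways."""
    import numpy as np
    import cv2
    cv2.imwrite(stem + ".png", depth_array)  # 16-bit PNG, lossless
    np.save(stem + ".npy", depth_array)      # raw array, easiest to reload
```

Either file preserves the exact depth values; `np.load` gives you the array back directly, and from pixel coordinates plus depth you can deproject to x, y, z with the camera intrinsics.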
Q2) Does the bag file contain only depth and infrared information, and no color? If I need color information as well, do I have to save a separate color video in addition to the .bag file?
Q3) I don't have much knowledge about deep learning yet. If I train on video data (including depth information) instead of still images, how is that done? (A video is a set of image frames, so can I think of video learning as just image learning with more images, or is it a different field?) Also, beyond depth, is infrared information or anything else necessary to classify a person's pose? In the end, I am implementing a program that collects data for an upper-body pose classifier. What information can I use as input data from a 3D camera: color and depth? And is the required input the color and depth information extracted from the .bag file (the three axes x, y, and z)? I'm rambling a bit because I don't know what data is needed to train on 3D-camera video.
I have a lot of questions. Sorry for the inconvenience. Have a good night!
src link: https://github.com/PsycIT/pyrsRecording/blob/master/RecordingApp.py
src
```python
class RecordingApp:
    def __init__(self, image_dir, video_dir):  # abridged; full source in the link above
        self.thread = None
        self.stopEvent = None

if __name__ == '__main__':
    ra = RecordingApp("image/", "video/")
    ra.root.mainloop()
```