Insta360Develop / CameraSDK-Cpp

CameraSDK-Cpp is a C++ library to control Insta360 cameras.
https://www.insta360.com

preview live streaming #10

Open edomil90 opened 2 years ago

edomil90 commented 2 years ago

Hello everyone,

I am trying the example from CameraSDK, and when I select the index (10) I read "successfully started live stream", but nothing happens. I saw the two .h264 files appearing in the folder, but I cannot open them (I tried with VLC). Am I missing something here?

Ben93kie commented 1 year ago

I'm wondering the same thing.

Ben93kie commented 1 year ago

Ok, for me it worked to view the camera stream in VLC. But since the SDK writes to a file, I cannot get an actual "live" stream, only a "video file" that plays from the point of starting the stream to the point of opening it. I guess I have to write a new program to realize that. Or is there any other "hacky" way to get the live stream, just for displaying? At a later point I'd have to write custom software anyway.

ricardosutana commented 1 year ago

Hi @edomil90 and @Ben93kie, I'm working on the same thing. I'm trying to figure out a way to capture those streams in OpenCV, but so far I haven't had any success. Let's keep working and posting our progress here! @Ben93kie, how did you set up VLC to capture the live stream?

Ben93kie commented 1 year ago

I literally just opened the .h264 file in VLC and saw the video as described earlier. Sounds good, I managed to recompile their code and could change stuff. Just not sure what and how to change.

ricardosutana commented 1 year ago

@Ben93kie I found a way to use OpenCV to manipulate the images. I put the file you mentioned (.h264) as the input to OpenCV and get the stream. For today I got an extremely delayed image; I think it is due to the bitrate param in the main.cc file, which is too large ((1024 * 1024) / 2). Later I will test with another value.

First, run the executable and choose option 10 for live stream.

Then run the code below in another terminal:

```python
#!/usr/bin/python3
import cv2

# video input: the .h264 file written by the SDK's live-stream option
vid = cv2.VideoCapture("01.h264")

while True:
    ret, frame = vid.read()
    if not ret:
        break

    # downscale for display
    scale_percent = 30  # percent of original size
    width = int(frame.shape[1] * scale_percent / 100)
    height = int(frame.shape[0] * scale_percent / 100)
    dim = (width, height)

    resized = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
    cv2.imshow('frame', resized)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

vid.release()
cv2.destroyAllWindows()
```

Ben93kie commented 1 year ago

Hey @ricardosutana, that's great to hear! Yes, it seems the file is compatible with standard streaming clients. I suppose the delayed image comes from the fact that it does not play the stream live, but rather "plays the video", the video being the stream from the moment you enter "10" to the point when you started displaying it (it does not go beyond that for me, at least in VLC; I haven't tried cv2 inside Python yet). Is that the case for you?

Besides, have you tried other bitrates? How would we deal with the ever-increasing file size of 1.h264 and 2.h264?

It feels to me that there is no way around adapting main.cc and recompiling. I have tried around a bit with "writing" to a socket instead of a file, but I'm a C++ newbie and it seems to be hard.

ricardosutana commented 1 year ago

Hi @Ben93kie, I will be working today on different bitrates, to see how they behave with respect to file size. To deal with the growth of the files, I think we could read the image (wherever it comes from) before it is written to a file. That way we could manipulate the image to keep the file size low.

The whole process is to modify and recompile main.cc. If you make any progress, could you post it here?

I will be posting my progress in this thread. If you need anything, email me: ricardosutana@gmail.com

Ben93kie commented 1 year ago

I'm in the process of trying to send individual frames from the C++ application to Python via shared memory. But being a C++ noob, this takes some time (I managed to send structs via shared memory; now I'm aiming at individual frames).
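For the Python-receiving side of such a handoff, here is a minimal sketch using the standard library's `multiprocessing.shared_memory`. The segment name `insta360_frame` and the 1280x720 BGR frame size are made-up assumptions for illustration; in practice the producer would be the C++ SDK app writing raw pixels into the same named segment.

```python
import numpy as np
from multiprocessing import shared_memory

HEIGHT, WIDTH = 720, 1280  # assumed frame size

# Producer side (would be the C++ SDK app in practice): write one BGR frame.
frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
frame[:, :, 1] = 255  # fill the green channel so the transfer is visible

shm = shared_memory.SharedMemory(create=True, size=frame.nbytes,
                                 name="insta360_frame")
dst = np.ndarray(frame.shape, dtype=frame.dtype, buffer=shm.buf)
dst[:] = frame  # copy the frame into the shared segment

# Consumer side (another process): attach by name and view the frame.
shm2 = shared_memory.SharedMemory(name="insta360_frame")
view = np.ndarray((HEIGHT, WIDTH, 3), dtype=np.uint8, buffer=shm2.buf)
assert view[0, 0, 1] == 255  # the frame written by the producer is visible

shm2.close()
shm.close()
shm.unlink()
```

Note the consumer gets a zero-copy view of the segment; a real setup would add a semaphore or sequence number so the reader knows when a fresh frame has landed.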

ricardosutana commented 1 year ago

Hi @Ben93kie, accidentally I found that the camera can be set up as a webcam... this is what I got: Screenshot from 2023-04-25 23-20-52

But I will be working on the spherical images. If it works for you, let me know and I'll send the code.

Ben93kie commented 1 year ago

How do you use it as a webcam? I.e. is it actually live, without a large latency? I'm still on the shared memory stuff, currently writing code to decode the video coming from the camera.

ricardosutana commented 1 year ago

To set it as a webcam, go to General > USB Mode > Webcam; after that you will be able to get the frames from /dev/video2 or /dev/video3.

Ben93kie commented 1 year ago

Nice, thanks. Yes, I also want to work on the 360° footage. I'm not 100% sure, but I think I managed to decode the stream inside the C++ app. My plan is to push individual frames to shared memory so that I can grab them in another process (Python) for further processing. Have a look at the attached file: main_adaptation.zip

Ben93kie commented 1 year ago

I managed to decode the stream inside C++ and display it via OpenCV. However, there is still considerable latency and there are video artifacts. I tried adjusting the bitrate, but that seemed to have no effect. Not sure what to do next; it could also be that my code is inefficient and the cause of that.

You need to have ffmpeg and OpenCV installed to run it.

main_adaptation.zip

Tianweihaihaihai commented 1 year ago

The SDK does not support a stitched live stream. You can use webcam mode if that satisfies your demand.

tsaizhenling commented 1 year ago

Hi @Tianweihaihaihai, regarding the SDK preview livestream:

can you clarify whether the video data streamed is the raw fisheye images? Or the full equirectangular 360 stitched image? Or two separate equirectangular images from each lens? Or crops of the equirectangular images from each lens?

A sample from the streams would be very helpful for someone who does not have the camera yet.

bouviervj commented 1 year ago

@Ben93kie Yes, my code is not much different from yours, using the ffmpeg library. I have an Nvidia card, so I use the 'nvidia_cuvid' codec for decoding, which leads to an NV12 image format: 2 planes, Y and UV. I'm not sure which camera you use, but I'm stitching RS 1-inch camera streams, i.e. two lens streams. Keep in mind that the YUV -> RGB transformation is CPU intensive; I try to avoid it. I get latency from the camera, and through streaming I cannot get under 500 ms latency to the player. It might be that your OpenCV algorithm could work with B&W frames, i.e. luminance only.

My pipeline is: decode with ffmpeg -> merge streams, stitch with DX12 (Windows) -> encode with ffmpeg. I get good results with a single 4K 30 fps stream, and bad results with 6K (supposedly 24 fps, 2 streams): I get irregular decoding, and I'm working on optimization (it could be that we can decode the streams in parallel with a multithreaded strategy). VideoDecoder.zip

Nothing much really in my decoding code. For your bitrate, from your computation, I would still multiply to get bits per second. You gave me the idea to cross-check for that situation too. I might have a low bitrate, but in full resolution the I-frames are awesome while the B-frames seem off: when something is moving in front of the camera, the image gets a lot of artifacts. Not sure what the limitation is here; there is nothing to see in lower resolutions.
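On the point about avoiding the CPU-heavy YUV -> RGB conversion for luminance-only processing: NV12 stores the full Y plane first, followed by the interleaved UV plane, so a grayscale frame can be sliced straight out of the buffer without any color conversion. A small numpy sketch (the frame dimensions are illustrative):

```python
import numpy as np

def nv12_luma(buf, width, height):
    """Extract the luminance (Y) plane from an NV12 frame buffer.
    NV12 layout: width*height Y bytes, then width*height/2 UV bytes."""
    y = np.frombuffer(buf, dtype=np.uint8, count=width * height)
    return y.reshape(height, width)

# Tiny synthetic 4x2 NV12 frame: 8 Y bytes followed by 4 interleaved UV bytes.
w, h = 4, 2
frame = bytes(range(w * h)) + bytes([128] * (w * h // 2))
luma = nv12_luma(frame, w, h)
assert luma.shape == (2, 4)
assert luma[0, 0] == 0 and luma[1, 3] == 7
```

Since `np.frombuffer` does not copy, this makes the B&W-frames idea essentially free on the CPU side.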

simbaforrest commented 10 months ago

> The SDK does not support a stitched live stream. You can use webcam mode if that satisfies your demand.

Hi @Tianweihaihaihai, thank you for answering questions. Under webcam mode, can we get the raw fisheye images from the two lenses? Currently I can only get two rectified images per frame, and they lose the 360 view.

karmelcorn commented 10 months ago

@Ben93kie @bouviervj, can we get 360 video from the .h264 files? It seems like the stitcher SDK provided by Insta360 only works on .insv files.

bouviervj commented 10 months ago

@karmelcorn you will have to decode the h264 video stream and stitch in real time (if needed), or take the recorded h264 and stitch it later.
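As a toy illustration of that stitch-it-yourself step, here is a nearest-neighbour remap from two ideal 180° fisheye images to one equirectangular frame. The equidistant projection model, centred lens axes, and flat-colour test images are all assumptions; a real stitcher additionally needs lens calibration, the physical offset between the lenses, and seam blending:

```python
import numpy as np

def dual_fisheye_to_equirect(front, back, out_w=256, out_h=128):
    """Remap two ideal 180-degree equidistant fisheye images (square,
    lens axis through the image centre) onto one equirectangular frame."""
    h, w = front.shape[:2]
    cx, cy, radius = w / 2.0, h / 2.0, min(w, h) / 2.0

    # Unit direction vector for every output pixel.
    lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi   # -pi..pi
    lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi   # +pi/2..-pi/2
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    out = np.zeros((out_h, out_w) + front.shape[2:], dtype=front.dtype)
    # Front lens looks along +z; back lens along -z with x mirrored.
    for img, dx, dz in ((front, x, z), (back, -x, -z)):
        mask = dz >= 0                               # hemisphere of this lens
        theta = np.arccos(np.clip(dz, -1.0, 1.0))    # angle from lens axis
        r = theta / (np.pi / 2) * radius             # equidistant projection
        phi = np.arctan2(y, dx)
        u = np.clip((cx + r * np.cos(phi)).astype(int), 0, w - 1)
        v = np.clip((cy + r * np.sin(phi)).astype(int), 0, h - 1)
        out[mask] = img[v[mask], u[mask]]
    return out

# Smoke test with flat-colour hemispheres.
front = np.full((64, 64), 100, dtype=np.uint8)
back = np.full((64, 64), 200, dtype=np.uint8)
pano = dual_fisheye_to_equirect(front, back)
assert pano[64, 128] == 100   # straight ahead -> front lens
assert pano[64, 0] == 200     # behind -> back lens
```

Precomputing `u`/`v` once and reusing them per frame (e.g. via `cv2.remap`) is what makes the real-time version feasible.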

Nick-0814 commented 8 months ago

> The SDK does not support a stitched live stream. You can use webcam mode if that satisfies your demand.

> Hi @Tianweihaihaihai, thank you for answering questions. Under webcam mode, can we get the raw fisheye images from the two lenses? Currently I can only get two rectified images per frame, and they lose the 360 view.

Hi @simbaforrest, do you know the answer to this question? The image I read in webcam mode using OpenCV is also not a fisheye image, and I'm wondering if it's possible to read a 180° fisheye image in webcam mode and stitch the images from the two cameras into a 360° image.

simbaforrest commented 8 months ago

No, I could not get it to work...