BreeeZe / rpos

Raspberry Pi Onvif Server
http://breeeze.github.io/rpos
MIT License

raspberry pi streaming with python picamera #60

Open cfdcfc opened 5 years ago

cfdcfc commented 5 years ago

I have successfully installed RPOS and am able to run it on a Raspberry Pi. On the Raspberry Pi, the Python library picamera supports outputting video to a file and to a stream at the same time, i.e.

camera.start_recording('1.h264', splitter_port=1)
camera.start_recording(stream, format='h264', splitter_port=2)

I am wondering how the above code can be combined with RPOS, i.e. RPOS opens a stdin stream and the above script outputs h264 to it. Thx.

RogerHardiman commented 5 years ago

What you need to do is get the h264 stream into an RTSP server. Rpos has 3 different RTSP servers you can use: two use Live555, one uses GStreamer.

Some of this may depend on whether you prefer c++ or python or have used both.

But you need to take one of the RTSP servers and change it to read stdin or a named pipe.

Do you have a preference for C++ and Live555 or Python and GStreamer? Then I can point you at some source files.

cfdcfc commented 5 years ago

@RogerHardiman thanks for the response.

I modified the launch_str in gst-rtsp-launch.py:

launch_str = "( udpsrc host=127.0.0.1 port=5000 typefind=true do-timestamp=true ! h264parse ! rtph264pay name=pay0 pt=96 )"

Then, I start raspivid and pipe it into gst-launch:

raspivid -t 0 -h 1080 -w 1920 -fps 15 -b 2000000 -ih -pf high -o - | gst-launch-1.0 fdsrc ! udpsink host=127.0.0.1 port=5000

This works, but I wonder if there is a better way that gets rid of the UDP hop?

RogerHardiman commented 5 years ago

In the case of GStreamer you can make it read from stdin. There is a GStreamer element for this but I cannot remember what it is called. (@Schwaneberg wrote the GStreamer support.)
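
For reference: the element in question is most likely fdsrc, which reads from a file descriptor and defaults to fd 0, i.e. stdin; the gst-launch pipeline above already uses it that way. An untested sketch of a launch string that would read H264 from the rpos process's own stdin:

launch_str = "( fdsrc ! h264parse ! rtph264pay name=pay0 pt=96 )"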

The other way, which does not use a local socket, is to use a Unix named pipe. You create a 'virtual file' and write to it; another process then reads from it.

https://www.linuxjournal.com/article/2156
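
A minimal, untested sketch of the named-pipe approach in Python (the /tmp/cam.h264 path and the pipeline are illustrative, not rpos code; note that raspivid blocks on the FIFO until a reader opens it, and if filesrc dislikes the FIFO, fdsrc on a pre-opened descriptor is an alternative):

import os, shlex, subprocess

fifo = '/tmp/cam.h264'  # illustrative path for the named pipe
if not os.path.exists(fifo):
    os.mkfifo(fifo)

# writer: raspivid outputs H264 into the pipe
subprocess.Popen(shlex.split(
    "raspivid -t 0 -w 1920 -h 1080 -fps 15 -b 2000000 -ih -pf high -o " + fifo))

# reader: point the RTSP launch string at the pipe instead of udpsrc
launch_str = "( filesrc location=/tmp/cam.h264 ! h264parse ! rtph264pay name=pay0 pt=96 )"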

RogerHardiman commented 5 years ago

Were you just trying raspivid as a quick test, before moving over to the Python Camera Library?

cfdcfc commented 5 years ago

@RogerHardiman

For Python Camera Library I am doing something like this:

from picamera import PiCamera
import shlex, subprocess

camera = PiCamera()

# pipe H264 from the camera into a local gstreamer process via its stdin
cmd = "gst-launch-1.0 fdsrc ! udpsink host=127.0.0.1 port=5000"
cmd = shlex.split(cmd)
gstreamer = subprocess.Popen(cmd, stdin=subprocess.PIPE)

# splitter port 1 records to a file, splitter port 2 feeds the gstreamer pipe
camera.start_recording('test.h264', format='h264', splitter_port=1)
camera.start_recording(gstreamer.stdin, format='h264', splitter_port=2)

Combine this with RPOS to achieve both recording and streaming.

Schwaneberg commented 5 years ago

Hello! GStreamer offers a lot of possibilities. The following example splits the stream using tee, creates an H264 RTSP video stream, and also provides a raw video stream via shared memory:

"( rpicamsrc preview=false bitrate=10000000 keyframe-interval=15 ! video/x-h264, framerate=30/1, width=1920, height=1080 ! h264parse ! tee name=t ! queue ! rtph264pay name=pay0 pt=96 t. ! queue ! omxh264dec ! shmsink socket-path=/tmp/sharepoint sync=false wait-for-connection=false shm-size=10000000 buffer-time=50000000 )"

We can also record the stream like this:

"( rpicamsrc preview=false bitrate=10000000 keyframe-interval=15 ! video/x-h264, framerate=30/1, width=1920, height=1080 ! h264parse ! tee name=t ! queue ! rtph264pay name=pay0 pt=96 t. ! queue ! mp4mux ! filesink location=video.mp4 )"

Schwaneberg commented 5 years ago

...and of course you can reverse the pipe, e.g. by using filesrc or shmsrc to read from files or shared memory and stream or play.
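
A sketch of the reverse direction, reading the shared-memory stream back with shmsrc (the socket path matches the shmsink above; the raw caps are an assumption and must match what omxh264dec actually outputs):

gst-launch-1.0 shmsrc socket-path=/tmp/sharepoint is-live=true ! video/x-raw, format=I420, width=1920, height=1080, framerate=30/1 ! videoconvert ! autovideosink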

RogerHardiman commented 5 years ago

Hi. In the gstreamer example from @Schwaneberg I noticed the trick of passing the H264 bitstream back through an H264 decoder (via OMX) to get raw video. I'd used a hack a bit like that in RPOS, where I fired up a background ffmpeg instance that connects to the RTSP server so I could get a JPEG image.

It is a neat hack.

It would be better if we could ask the camera for H264 and raw video frames at the same time. I did wonder if the Python Camera Library can do this.
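
For reference, picamera's splitter ports can deliver both at once. A minimal, untested sketch, assuming picamera's documented splitter-port behaviour:

from picamera import PiCamera

camera = PiCamera(resolution=(1920, 1080), framerate=30)
# H264 bitstream on splitter port 1
camera.start_recording('video.h264', format='h264', splitter_port=1)
# raw YUV frames on splitter port 2, at the same time
camera.start_recording('frames.yuv', format='yuv', splitter_port=2)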

cfdcfc commented 5 years ago

I notice that in gst-rtsp-launch.py there is a restriction on fps: self.fps_range = [15, 90]

I wonder what the reason for this restriction is, and whether it's possible to lower the fps to, say, 1?

cfdcfc commented 5 years ago

The fps can be lowered to 1, but the stream will only be ready once a key-frame is received.

In picamera, camera.request_key_frame(splitter_port=1) requests that the encoder generate a key-frame as soon as possible.
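
A minimal, untested sketch combining a low frame rate with an immediate key-frame request, so a client does not have to wait for the next scheduled key-frame (the output file name is illustrative):

from picamera import PiCamera

camera = PiCamera(framerate=1)
camera.start_recording('low_fps.h264', format='h264', splitter_port=1)
# ask the encoder to emit a key-frame as soon as possible
camera.request_key_frame(splitter_port=1)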

Schwaneberg commented 5 years ago

Low frame rates can be achieved using the still image mode. But I think the restriction to 15-90 fps isn't a bad thing; we just need to communicate the property to the client. But beware: 90 fps is only available at 640x480! 1920x1080 (FHD) is limited to 30 fps.

thezab commented 3 years ago

@RogerHardiman regarding your comment where you were able to get a JPEG image, could you tell me more about how you did it?

RogerHardiman commented 3 years ago

ffmpeg -fflags nobuffer -probesize 256 -rtsp_transport tcp -i rtsp://127.0.0.1:${this.config.RTSPPort}/${this.config.RTSPName} -vframes 1 -r 1 -s 640x360 -y /dev/shm/snapshot.jpg

RogerHardiman commented 3 years ago

In the services/media files in RPOS there is code that launches ffmpeg, makes it connect to the local RTSP server, and extracts one JPEG.

AnishDey27 commented 3 years ago

You can use a plain TCP socket stream for the video to get ultra-low latency, at up to 20-25 FPS.

Server:

import socket, cv2, struct

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host_name = socket.gethostname()
host_ip = socket.gethostbyname(host_name)
print('HOST IP:', host_ip)
port = 9999
socket_address = (host_ip, port)
server_socket.bind(socket_address)
server_socket.listen(5)
print("LISTENING AT:", socket_address)

while True:
    client_socket, addr = server_socket.accept()
    print('GOT CONNECTION FROM:', addr)
    if client_socket:
        vid = cv2.VideoCapture(0)
        while vid.isOpened():
            ret, frame = vid.read()
            if not ret:
                break
            # JPEG-encode the frame and prefix it with its 8-byte length
            data = cv2.imencode('.jpg', frame)[1].tobytes()
            message = struct.pack("Q", len(data)) + data
            client_socket.sendall(message)
            cv2.imshow('TRANSMITTING VIDEO', frame)
            key = cv2.waitKey(1) & 0xFF
            if key == ord('q'):
                client_socket.close()
                break

Client:

import socket, cv2, struct, time
import numpy as np

client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host_ip = '192.168.137.1'
port = 9999
client_socket.connect((host_ip, port))
data = b""
payload_size = struct.calcsize("Q")

while True:
    start = time.perf_counter()
    # read until the 8-byte length header has arrived
    while len(data) < payload_size:
        packet = client_socket.recv(4 * 1024)  # 4K
        if not packet:
            break
        data += packet
    packed_msg_size = data[:payload_size]
    data = data[payload_size:]
    msg_size = struct.unpack("Q", packed_msg_size)[0]

    # read until the full frame has arrived
    while len(data) < msg_size:
        data += client_socket.recv(4 * 1024)
    frame_data = data[:msg_size]
    data = data[msg_size:]
    nparr = np.frombuffer(frame_data, np.uint8)
    frame = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    stop = time.perf_counter()
    cv2.imshow("RECEIVING VIDEO", frame)
    print(1 / (stop - start), "fps")
    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break

client_socket.close()