miguelgrinberg / Flask-SocketIO

Socket.IO integration for Flask applications.
MIT License

SocketIO: Video Streaming from server to client #778

Closed Fizmath closed 6 years ago

Fizmath commented 6 years ago

Dear Miguel:

Hello

This is from your video streaming page, plus an emit on the server side (I removed the return):

@app.route('/video_feed')
def video_feed():
       socketio.emit('from_flask',
                     Response(gen(), mimetype='multipart/x-mixed-replace; boundary=frame'),
                     namespace='/test')

The problem here is that the Response and the generator are being mixed with event-driven SocketIO.
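
One way this could be restructured (a minimal sketch, assuming the existing socketio instance and a hypothetical gen_frames() generator that yields raw JPEG bytes per frame) is to push frames from a background task and emit each one as a data URI instead of returning a Response:

# Sketch only: emit frames from a background task as data-URI strings, so the
# client can assign the received value directly to an <img> src attribute.
import base64

def stream_frames():
    # gen_frames() is a hypothetical generator yielding raw JPEG bytes per frame
    for jpeg_bytes in gen_frames():
        data_uri = 'data:image/jpeg;base64,' + base64.b64encode(jpeg_bytes).decode('utf-8')
        socketio.emit('from_flask', data_uri, namespace='/test')
        socketio.sleep(0)  # let the async framework switch tasks between frames

@socketio.on('connect', namespace='/test')
def start_stream():
    socketio.start_background_task(stream_frames)

With frames emitted as data URIs, a client handler like the one below can assign the received string straight to the image's src.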

client

socket.on('from_flask', function (data) {
    $("#image").attr("src", data);
});

html

<img id="image" src="video_feed">

Would you please help me? I know something is wrong here, but I can't figure it out.

Thanks a million for your great work

AhmedBhati commented 4 years ago

Performance-wise, I found WebSockets faster and better than HTTP when using multiple clients, because with HTTP there would be a three-way handshake for every request, whereas with WebSocket it happens only once at the start of the connection.

If you want to see the difference, I have added a repository in which I used HTTP as the protocol, and I have attached a demo video of the output there, so you can compare the two.

# app.py
import base64
import time
import cv2
from time import sleep
from flask import Flask, render_template, request
from flask_socketio import SocketIO, emit
import eventlet

eventlet.monkey_patch()
app = Flask(__name__)
socketio = SocketIO(app, logger=True, async_mode='eventlet')

@socketio.on('connect', namespace='/web')
def connect_web():
    print('[INFO] Web client connected: {}'.format(request.sid))

@socketio.on('disconnect', namespace='/web')
def disconnect_web():
    print('[INFO] Web client disconnected: {}'.format(request.sid))

@socketio.on('connect', namespace='/local')
def connect_cv():
    print('[INFO] CV client connected: {}'.format(request.sid))

@socketio.on('disconnect', namespace='/local')
def disconnect_cv():
    print('[INFO] CV client disconnected: {}'.format(request.sid))

@app.route('/')
def index():
    return render_template('index.html', async_mode=socketio.async_mode)

@socketio.on('stream_request', namespace='/local')
def stream_video(message):
    # relay frames from the camera client ('/local') to every web client ('/web');
    # a server-level socketio.emit() broadcasts by default
    socketio.emit('stream_response', message, namespace='/web')

if __name__ == '__main__':
    socketio.run(app, host="127.0.0.1", port=5000)

# camera.py
import datetime
from threading import Thread
import base64
import time
import cv2
import socketio

sio = socketio.Client(logger=True)

@sio.event
def connect():
    print('[INFO] Successfully connected to server')

@sio.event
def connect_error():
    print('[INFO] Failed to connect to server.')

@sio.event
def disconnect():
    print('[INFO] Disconnected from server.')

def encode_image(image):
    image = cv2.imencode('.jpg', image)[1].tobytes()
    image = base64.b64encode(image).decode('utf-8')
    image = f"data:image/jpeg;base64,{image}"

    return image

class FPS:
    def __init__(self):
        # store the start time, end time and total number of frames
        # that were examined between the start and end intervals
        self._start = None
        self._end = None
        self._numFrames = 0

    def start(self):
        # start the timer
        self._start = datetime.datetime.now()
        return self

    def stop(self):
        # stop the timer
        self._end = datetime.datetime.now()

    def update(self):
        # increment the total number of frames examined during the
        # start and end intervals
        self._numFrames += 1

    def elapsed(self):
        # return the total number of seconds between the start and 
        # end interval
        return (self._end - self._start).total_seconds()

    def fps(self):
        # compute the (approximate) frames per second
        return self._numFrames / self.elapsed()

class WebCamVideoStream:
    def __init__(self, src=0):
        # initialize the video camera stream and read the first frame 
        # from the stream
        self.stream = cv2.VideoCapture(src)
        (self.grabbed, self.frame) = self.stream.read()

        # initialize the variable used to indicate if the thread
        # should be stopped
        self.stopped = False

    def start(self):
        # start the thread to read frames from the video stream
        Thread(target=self.update, args=()).start()
        return self

    def update(self):
        # keep looping infinitely until the thread is stopped
        while True:
            # if the thread indicator variable is set, stop the thread
            if self.stopped:
                return
            # otherwise read the next frame from the stream
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # return the frame most recently read
        return self.frame

    def stop(self):
        # indicate that the thread should be stopped
        self.stopped = True

def main():
    # SLOW VERSION
    cap = cv2.VideoCapture(0)
    #cap.set(cv2.CAP_PROP_FRAME_WIDTH, 600)
    #cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)
    # stream = WebCamVideoStream(src=0).start()
    i = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        #frame = stream.read()
        frame = cv2.resize(frame, (320, 280))
        # emit on '/local', the only namespace this client connects to
        sio.emit('stream_request', {'image': encode_image(frame)}, namespace='/local')
    #stream.stop()
    cap.release()
    #cv2.destroyAllWindows()

if __name__ == "__main__":
    sio.connect('http://127.0.0.1:5000', namespaces=['/local'])
    main()
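
A quick usage note for the two scripts above: presumably app.py is started first (it serves index.html at http://127.0.0.1:5000/), then camera.py is run as a separate process so it can connect to that address and start emitting frames, and the web page (not shown here) listens for the stream_response events on the /web namespace.
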
whikwon commented 4 years ago

I'm sorry, but I can't find the difference between the code you shared and mine. Could you let me know?

By the way, I found a way to do real-time streaming based on your repository.

app.py

import base64
import time
import cv2
from flask import Flask, render_template, request, Response
from flask_socketio import SocketIO, emit
import eventlet

eventlet.monkey_patch()
app = Flask(__name__)
socketio = SocketIO(app, logger=True, async_mode='eventlet')

@socketio.on('connect', namespace='/web')
def connect_web():
    print('[INFO] Web client connected: {}'.format(request.sid))

@socketio.on('disconnect', namespace='/web')
def disconnect_web():
    print('[INFO] Web client disconnected: {}'.format(request.sid))

@socketio.on('connect', namespace='/local')
def connect_cv():
    print('[INFO] CV client connected: {}'.format(request.sid))

@socketio.on('disconnect', namespace='/local')
def disconnect_cv():
    print('[INFO] CV client disconnected: {}'.format(request.sid))

@app.route('/')
def index():
    return render_template('index.html', async_mode=socketio.async_mode)

@socketio.on('inc_request', namespace='/web')
def inc_request(message):
    message += 'a'
    socketio.emit('inc_response', message, namespace='/web')

@socketio.on('stream_request', namespace='/web')
def video_generator():
    import imagezmq
    subscriber = imagezmq.ImageHub('tcp://127.0.0.1:5555', REQ_REP=False)
    while True:
        camera_id, frame = subscriber.recv_image()
        frame = cv2.imencode('.jpg', frame)[1].tobytes()
        frame = base64.encodebytes(frame).decode('utf-8')
        frame = f"data:image/jpeg;base64,{frame}"
        socketio.emit('stream_response', {'image': frame}, namespace='/web')
        socketio.sleep(0)

if __name__ == '__main__':
    socketio.run(app, host="127.0.0.1", port=5000, debug=True)

index.html

<html>
<head>
    <title>SocketIO</title>
    <script src="//cdnjs.cloudflare.com/ajax/libs/socket.io/2.2.0/socket.io.js" integrity="sha256-yr4fRk/GU1ehYJPAs8P4JlTgu0Hdsp4ZKrx8bDEDC3I=" crossorigin="anonymous"></script>
    <style>
        .container {
            display: grid;
            grid-template-rows: repeat(2, 1fr);
            grid-template-columns: repeat(2, 1fr);
        }
        .contents {
            background: white;
        }
        .images {
            background: #665c9c;
        }
    </style>
</head>
<body>
    <div class="container">
        <div class="images">
            <img id="streamed-image" src="">
        </div>
        <div class="contents">
            <button id='btn'>a</button>
        </div>
    </div>
    <script type="text/javascript" charset="utf-8">
        const namespace = '/web';
        const socket = io(namespace);

        document.addEventListener('DOMContentLoaded', () => {
            socket.on('stream_response', (msg) => {
                document.querySelector('#streamed-image').src = msg.image;
            });
            socket.on('inc_response', (msg) => {
                document.querySelector('#btn').textContent = msg; 
            });
            socket.emit('stream_request');
        });
        document.querySelector('#btn').addEventListener('click', () => {
            socket.emit('inc_request', document.querySelector('#btn').textContent);
        });
    </script>
</body>
</html>

camera.py

import cv2
import imagezmq

def main():
    publisher = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555', REQ_REP=False)
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 600)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)
    i = 0
    while True:
        ret, frame = cap.read()
        publisher.send_image('camera', frame)
        # cv2.imshow('imagezmq', frame)
        if i % 10 == 0:
            print(i)  # log every 10th frame
        i += 1

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()

AhmedBhati commented 4 years ago

@whikwon I have just reduced the frame size and the delay, and have attached a video of your code's output. Check it out: videostreaming.zip

Basically, in the code attached above, camera.py sends frames to port 5555, and the server then takes the frames from that port and sends them out over port 5000, which will create more delay.

whikwon commented 4 years ago

I needed to separate the camera for other usage. I've checked your video and it works very well.

Thank you for your help. I'll try the experiment with your advice!

Zarvoira commented 3 years ago

Hello guys, I'm trying a similar thing: I want the client (a Raspberry Pi) to send video to a remote server (AWS). What should I do with the @whikwon / @AhmedBhati examples? I'm not quite sure: should I run camera.py on the Pi and put the Flask server on AWS?

Zarvoira commented 3 years ago

I want my camera.py to run on the Pi and send the video over a socket somehow, and the server (AWS) to receive it in Flask.

Right now I have:

camera.py, where I get the picture from the webcam and do some processing:


#Modified by
#Date: 27.06.20
#Desc: This script runs face recognition on a live webcam stream. This is a modified
#version of the original Ageitgey (GitHub) face recognition demo to include multiple faces.
#Simply add your desired 'passport-style' face to the 'profiles' folder.

import face_recognition
import cv2
import numpy as np
import os
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
ds_factor=0.6

#Store objects in array
known_person=[] #Name of person string
known_image=[] #Image object
known_face_encodings=[] #Encoding object

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
name_gui = "Unknown"  # default label in case no face has been recognized yet
process_this_frame = True

#Loop to add images in friends folder
for file in os.listdir("profiles"):
    try:
        #Extracting person name from the image filename eg: david.jpg
        known_person.append(file.replace(".jpg", ""))
        file=os.path.join("profiles/", file)
        known_image = face_recognition.load_image_file(file)
        #print("test")
        #print(face_recognition.face_encodings(known_image)[0])
        known_face_encodings.append(face_recognition.face_encodings(known_image)[0])
        #print(known_face_encodings)

    except Exception as e:
        pass

#print(len(known_face_encodings))
#print(known_person)

class VideoCamera(object):
    def __init__(self):
        self.video = cv2.VideoCapture(0)

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()

        process_this_frame = True

        # Resize frame of video to 1/4 size for faster face recognition processing
        small_frame = cv2.resize(image, (0, 0), fx=0.25, fy=0.25)

        # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
        rgb_small_frame = small_frame[:, :, ::-1]

       # Only process every other frame of video to save time
        if process_this_frame:
            # Find all the faces and face encodings in the current frame of video
            face_locations = face_recognition.face_locations(rgb_small_frame)
            face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

            global name_gui
            #face_names = []
            for face_encoding in face_encodings:
                # See if the face is a match for the known face(s)
                matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
                name = "Unknown"

                #print(face_encoding)
                print(matches)

                face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
                best_match_index = np.argmin(face_distances)
                if matches[best_match_index]:
                    name = known_person[best_match_index]

                print(name)
                #print(face_locations)
                face_names.append(name)

                name_gui = name

        process_this_frame = not process_this_frame

        # Display the results
        for (top, right, bottom, left), name in zip(face_locations, face_names):
            # Scale back up face locations since the frame we detected in was scaled to 1/4 size
            top *= 4
            right *= 4
            bottom *= 4
            left *= 4

            # Draw a box around the face
            cv2.rectangle(image, (left, top), (right, bottom), (255, 255, 255), 2)

            # Draw a label with a name below the face
            cv2.rectangle(image, (left, bottom - 35), (right, bottom), (255, 255, 255), cv2.FILLED)
            font = cv2.FONT_HERSHEY_DUPLEX
            cv2.putText(image, name_gui, (left + 10, bottom - 10), font, 1.0, (0, 0, 0), 1)

        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()

main.py, where my Flask server runs:


from flask import Flask, render_template, Response, request
from camera import VideoCamera
import time
import os

app = Flask(__name__)
#app = Flask(__name__, template_folder='/var/www/html/templates')

#background process happening without any refreshing

@app.route('/', methods=['GET', 'POST'])
def move():
    result = ""
    if request.method == 'POST':

        return render_template('index.html', res_str=result)

    return render_template('index.html')

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='192.168.1.15', debug=True, threaded=True)

And the index.html where I display the video:


<div class="main" id="newpost">
  <img  class="camera-bg" style="width: 100%; height:80%; background-attachment: fixed;" id="bg" class="center" src="{{ url_for('video_feed') }}">
  <!--<img  class="camera-bg" style="width: 100%; height:80%; background-attachment: fixed;" id="bg" class="center" src="https://www.psdbox.com/wp-content/uploads/2011/01/security-camera-photoshop-effect.jpg">-->

</div>
AhmedBhati commented 3 years ago

@Zarvoira I haven't used Flask with Amazon AWS, but if you want to send video from a client to a remote server, try using python-socketio: the client takes the video feed and sends it to the server, and the server displays the video. I have tried this and it worked, but only for live video feeds; it wasn't able to send saved video files.
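
As a rough illustration of that setup, here is a minimal sketch of a client script that could run on the Pi and push frames to a remote Flask-SocketIO server like the app.py shown earlier in this thread (the server address is a placeholder, and the '/local' namespace and 'stream_request' event are assumptions carried over from that example):

# pi_camera.py -- sketch only: runs on the Raspberry Pi and pushes webcam frames
# to a remote Flask-SocketIO server. SERVER_URL is a placeholder, not a real host.
import base64
import cv2
import socketio

SERVER_URL = 'http://<aws-public-ip>:5000'

sio = socketio.Client()

def main():
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            frame = cv2.resize(frame, (320, 240))  # keep payloads small over the network
            jpeg = cv2.imencode('.jpg', frame)[1].tobytes()
            data_uri = 'data:image/jpeg;base64,' + base64.b64encode(jpeg).decode('utf-8')
            sio.emit('stream_request', {'image': data_uri}, namespace='/local')
    finally:
        cap.release()

if __name__ == '__main__':
    sio.connect(SERVER_URL, namespaces=['/local'])
    main()

On the AWS side, the Flask-SocketIO app would need to listen on all interfaces (for example socketio.run(app, host='0.0.0.0', port=5000)) and the chosen port must be open in the instance's security group.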

Zarvoira commented 3 years ago

Hey, can you give me some sample code for how to do that, @AhmedBhati?

AhmedBhati commented 3 years ago

I'll upload the code to a GitHub repo by this weekend and will share the link to the repository, @Zarvoira.

AhmedBhati commented 3 years ago

@Zarvoira go through the Flask SocketIO repository; the demo video as well as the code is present in it.

Zarvoira commented 3 years ago

Thank you @AhmedBhati, you should make YouTube videos about this; I think there are a lot of people who want to see this stuff!

AhmedBhati commented 3 years ago

@Zarvoira thanks for the suggestion. If you face any difficulty, ping me or raise an issue; I'd be happy to help.

slowpoison752 commented 3 years ago

@miguelgrinberg I have a Flask app running that takes a video stream from the client's webcam, processes it, and creates a log file on the server end. It works fine for a single client. Now I want to be able to serve thousands of clients if and when needed; what would be the most efficient way to go about this? I am not sending processed video back to the client; each client system only has its own live stream. I tried it with two clients and the video streams got scrambled. Why is this so? Is this an issue with Flask?

miguelgrinberg commented 3 years ago

@slowpoison752 The only reason I can think of for the video from the two clients getting mixed up is that this is a bug in your application. Nothing in Flask or Flask-SocketIO can cause that.
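
To illustrate the kind of application bug that typically produces this symptom, here is a minimal sketch (the event names and the client_frames dict are hypothetical, not taken from the app described above): state shared at module level between clients will mix their streams, whereas state keyed by request.sid keeps each client separate.

# Sketch only: keep per-client state keyed by request.sid so that frames from
# one client never end up in another client's processing pipeline or log file.
from flask import Flask, request
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

# Hypothetical per-client buffers. A single shared module-level buffer here is
# a common cause of two clients' streams getting scrambled together.
client_frames = {}

@socketio.on('frame')
def handle_frame(data):
    sid = request.sid
    client_frames.setdefault(sid, []).append(data)    # state isolated per client
    emit('ack', {'frames': len(client_frames[sid])})  # replies only to the sender

@socketio.on('disconnect')
def cleanup():
    client_frames.pop(request.sid, None)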