whitphx / streamlit-webrtc

Real-time video and audio streams over the network, with Streamlit.
https://discuss.streamlit.io/t/new-component-streamlit-webrtc-a-new-way-to-deal-with-real-time-media-streams/8669
MIT License
1.27k stars 176 forks

Why error: AttributeError: 'NoneType' object has no attribute 'call_exception_handler' #1618

Closed YudhaDevelops closed 1 month ago

YudhaDevelops commented 1 month ago

You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.1.3:8501

Exception in callback Transaction.retry()
handle: <TimerHandle when=1014459.718 Transaction.retry()>
Traceback (most recent call last):
  File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 1054, in sendto
    self._sock.sendto(data, addr)
AttributeError: 'NoneType' object has no attribute 'sendto'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\*****\miniconda3\envs\conda39\lib\site-packages\aioice\stun.py", line 312, in retry
    self.__protocol.send_stun(self.request, self.addr)
  File "C:\Users\*****\miniconda3\envs\conda39\lib\site-packages\aioice\ice.py", line 266, in send_stun
    self.transport.sendto(bytes(message), addr)
  File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 1064, in sendto
    self._fatal_error(
  File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 711, in _fatal_error
    self._loop.call_exception_handler({
AttributeError: 'NoneType' object has no attribute 'call_exception_handler'

Exception in callback Transaction.retry()
handle: <TimerHandle when=1014461.453 Transaction.retry()>
Traceback (most recent call last):
  File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 1054, in sendto
    self._sock.sendto(data, addr)
AttributeError: 'NoneType' object has no attribute 'sendto'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\*****\miniconda3\envs\conda39\lib\site-packages\aioice\stun.py", line 312, in retry
    self.__protocol.send_stun(self.request, self.addr)
  File "C:\Users\*****\miniconda3\envs\conda39\lib\site-packages\aioice\ice.py", line 266, in send_stun
    self.transport.sendto(bytes(message), addr)
  File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 1064, in sendto
    self._fatal_error(
  File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 711, in _fatal_error
    self._loop.call_exception_handler({
AttributeError: 'NoneType' object has no attribute 'call_exception_handler'

My code:

import threading
from typing import Union

import av
import cv2
import numpy as np
import streamlit as st
import tensorflow as tf
from streamlit_webrtc import (
    RTCConfiguration,
    VideoTransformerBase,
    WebRtcMode,
    webrtc_streamer,
)

# STUN-only ICE configuration; deployments behind restrictive NATs/firewalls
# typically also need a TURN server in this list (see the TURN discussion below)
RTC_CONFIGURATION = RTCConfiguration(
    {"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]}
)

PATH_TO_MODEL = "./models/detectObject/model.tflite"
PATH_TO_LABELS = "./models/detectObject/labels.txt"

@st.cache_resource
def load_tf_lite_model():
    try:
        interpreter = tf.lite.Interpreter(model_path=PATH_TO_MODEL)
        interpreter.allocate_tensors()

        return interpreter
    except ValueError as ve:
        print("Error loading the TensorFlow Lite model:", ve)
        exit()

@st.cache_resource
def load_labels():
    with open(PATH_TO_LABELS, "r") as f:
        labels = [line.strip() for line in f.readlines()]
        return labels

def detect_capture(image):
    interpreter = load_tf_lite_model()
    labels = load_labels()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    height = input_details[0]["shape"][1]
    width = input_details[0]["shape"][2]

    float_input = input_details[0]["dtype"] == np.float32

    image_resized = cv2.resize(image, (width, height))
    input_data = np.expand_dims(image_resized, axis=0)

    input_mean = 127.5
    input_std = 127.5

    # Normalize pixel values
    if float_input:
        input_data = (np.float32(input_data) - input_mean) / input_std

    # Perform the actual detection
    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()

    # Retrieve detection results (the output tensor order is model-specific;
    # this TFLite export puts scores at index 0, boxes at 1, classes at 3)
    boxes = interpreter.get_tensor(output_details[1]["index"])[0]
    classes = interpreter.get_tensor(output_details[3]["index"])[0]
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    st.write("Kelas : {} \n"
          "Score : {} \n"
          "Boxes : {}".format(classes,scores,boxes))

    imH, imW, _ = image.shape

    for i in range(len(scores)):
        if (scores[i] > 0.05) and (scores[i] <= 1.0):
            ymin = int(max(1, (boxes[i][0] * imH)))
            xmin = int(max(1, (boxes[i][1] * imW)))
            ymax = int(min(imH, (boxes[i][2] * imH)))
            xmax = int(min(imW, (boxes[i][3] * imW)))

            cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)

            # Draw label
            object_name = labels[int(classes[i])]
            label = "%s: %d%%" % (
                object_name,
                int(scores[i] * 100),
            )
            labelSize, baseLine = cv2.getTextSize(
                label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2
            )
            label_ymin = max(ymin, labelSize[1] + 10)
            cv2.rectangle(
                image,
                (xmin, label_ymin - labelSize[1] - 10),
                (xmin + labelSize[0], label_ymin + baseLine - 10),
                (255, 255, 255),
                cv2.FILLED,
            )
            cv2.putText(
                image,
                label,
                (xmin, label_ymin - 7),
                cv2.FONT_HERSHEY_SIMPLEX,
                0.7,
                (0, 0, 0),
                2,
            )
    print("Kelas : {} \n"
          "Score : {} \n"
          "Boxes : {}".format(classes,scores,boxes))
    return image

# Note: VideoTransformerBase and transform() are the older, deprecated API;
# newer streamlit-webrtc versions recommend VideoProcessorBase with recv()
class VideoTransformer(VideoTransformerBase):
    frame_lock: threading.Lock
    out_image: Union[np.ndarray, None]

    def __init__(self) -> None:
        self.frame_lock = threading.Lock()
        self.out_image = None

    def transform(self, frame: av.VideoFrame) -> np.ndarray:
        interpreter = load_tf_lite_model()
        labels = load_labels()

        input_mean = 127.5
        input_std = 127.5

        # get model details
        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()
        height = input_details[0]["shape"][1]
        width = input_details[0]["shape"][2]
        float_input = input_details[0]["dtype"] == np.float32

        out_image = frame.to_ndarray(format="bgr24")

        imH, imW, _ = out_image.shape

        image_resized = cv2.resize(out_image, (width, height))
        input_data = np.expand_dims(image_resized, axis=0)

        # Normalize pixel values
        if float_input:
            input_data = (np.float32(input_data) - input_mean) / input_std

        # Perform the actual detection
        interpreter.set_tensor(input_details[0]["index"], input_data)
        interpreter.invoke()

        # Retrieve detection results
        boxes = interpreter.get_tensor(output_details[1]["index"])[0]
        classes = interpreter.get_tensor(output_details[3]["index"])[0]
        scores = interpreter.get_tensor(output_details[0]["index"])[0]

        for i in range(len(scores)):
            if (scores[i] > 0.05) and (scores[i] <= 1.0):
                ymin = int(max(1, (boxes[i][0] * imH)))
                xmin = int(max(1, (boxes[i][1] * imW)))
                ymax = int(min(imH, (boxes[i][2] * imH)))
                xmax = int(min(imW, (boxes[i][3] * imW)))

                cv2.rectangle(out_image, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)

                # Draw label
                object_name = labels[int(classes[i])]

                label = "%s: %d%%" % (
                    object_name,
                    int(scores[i] * 100),
                )

                labelSize, baseLine = cv2.getTextSize(
                    label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2
                )

                label_ymin = max(ymin, labelSize[1] + 10)

                cv2.rectangle(
                    out_image,
                    (xmin, label_ymin - labelSize[1] - 10),
                    (xmin + labelSize[0], label_ymin + baseLine - 10),
                    (255, 255, 255),
                    cv2.FILLED,
                )

                cv2.putText(
                    out_image,
                    label,
                    (xmin, label_ymin - 7),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.7,
                    (0, 0, 0),
                    2,
                )

        # Store the latest annotated frame once per frame, outside the detection
        # loop, so SNAPSHOT also works when nothing passed the score threshold
        with self.frame_lock:
            self.out_image = out_image

        print("Shape : {}".format(out_image.shape))
        return out_image

def realtime_video_detection():
    info = st.empty()
    info.markdown("First, click on :blue['START'] to use webcam")
    ctx = webrtc_streamer(
        key="object_detection",
        mode=WebRtcMode.SENDRECV,
        rtc_configuration=RTC_CONFIGURATION,
        video_processor_factory=VideoTransformer,
        media_stream_constraints={"video": True, "audio": False},
        async_processing=True,
    )
    if ctx.video_transformer:
        info.markdown("Click on :blue['SNAPSHOT'] to take a picture")
        snap = st.button("SNAPSHOT")
        if snap:
            # Copy the latest frame while holding the lock to avoid racing
            # with the video callback thread
            with ctx.video_transformer.frame_lock:
                out_image = (
                    ctx.video_transformer.out_image.copy()
                    if ctx.video_transformer.out_image is not None
                    else None
                )
            if out_image is not None:

                st.write("Sebelum:")
                st.image(out_image, channels="BGR")
                image = detect_capture(out_image)
                st.write("Sesudah:")
                st.image(image, channels="BGR")

if __name__ == "__main__":
    realtime_video_detection()
YudhaDevelops commented 1 month ago

I am using Python version 3.9.19.

KKopilka commented 1 month ago

Hello! I have the same error :(

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python311\Lib\site-packages\aioice\ice.py", line 797, in check_start
    response, addr = await pair.protocol.request(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\aioice\ice.py", line 254, in request
    return await transaction.run()
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\aioice\stun.py", line 301, in run
    self.retry()
  File "C:\Python311\Lib\site-packages\aioice\stun.py", line 312, in retry
    self.__protocol.send_stun(self.request, self.addr)
  File "C:\Python311\Lib\site-packages\aioice\ice.py", line 266, in send_stun
    self.transport.sendto(bytes(message), addr)
  File "C:\Python311\Lib\asyncio\selector_events.py", line 1206, in sendto
    self._fatal_error(
  File "C:\Python311\Lib\asyncio\selector_events.py", line 873, in _fatal_error
    self._loop.call_exception_handler({
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'call_exception_handler'

JHFVR commented 1 month ago

Same here. I only get it when I push my app to a web server (Cloud Foundry in my case); locally it works just fine. I implemented the TURN setup via Twilio as suggested.
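
For anyone replicating that setup, here is a minimal sketch of the Twilio-based TURN configuration (the TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN environment variable names are assumptions; adapt them to your own credential store):

import os

from twilio.rest import Client

# Fetch ephemeral ICE servers (STUN + TURN) from Twilio's Network Traversal Service
client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
token = client.tokens.create()

# Pass these to webrtc_streamer instead of a STUN-only config
rtc_configuration = {"iceServers": token.ice_servers}

With this, webrtc_streamer(key="example", rtc_configuration=rtc_configuration) can fall back to the TURN relays when a direct connection cannot be established.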

overheat commented 1 month ago

Also Python 3.11 and Python 3.12 on macOS with a conda env.

Also on Ubuntu 24.04.

KKopilka commented 1 month ago

I noticed that with streamlit==1.34.0 the camera doesn't work. I tried installing an earlier version and it worked fine.

Hansson0728 commented 1 month ago

Same here: issues with 1.34. I downgraded to 1.3 and it is working. I'm running Streamlit in a container and also using a local Docker TURN server; with 1.3, no problems.
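
For anyone applying the same workaround, the downgrade is just a version pin, e.g.:

pip install "streamlit==1.33.0"

(1.33.0 here is an assumption based on the maintainer's guidance below that streamlit<=1.33.0 works with any streamlit-webrtc version.)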

whitphx commented 1 month ago

Thank you for the report. Streamlit updated the internals of its custom component API, and that change affected this extension, which relies on some internal hacks. The next release will fix it.

tginart commented 1 month ago

Hey @whitphx, I appreciate that you are patching this up!

In the meantime, is there a requirements.txt with the correct versions of streamlit and other dependencies that we can use with the latest release?

whitphx commented 1 month ago

If you are using streamlit<=1.33.0, any version of streamlit-webrtc should work. If you are using streamlit>=1.34.0, please specify streamlit-webrtc>=0.47.7.
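
In requirements.txt form, a minimal sketch following those pins:

# Option A: latest Streamlit with the fixed extension
streamlit>=1.34.0
streamlit-webrtc>=0.47.7

# Option B: stay on older Streamlit; any streamlit-webrtc version works
# streamlit<=1.33.0
# streamlit-webrtc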

YudhaDevelops commented 1 month ago

Thank you @whitphx for helping me and fixing the package, so I can finish my final report program.