chaquo / chaquopy

Chaquopy: the Python SDK for Android
https://chaquo.com/chaquopy/
MIT License

RuntimeError: No ffmpeg exe could be found. #435

Closed: Brokoth closed this issue 3 years ago

Brokoth commented 3 years ago

When using imageio to read YouTube videos in Python, as shown below:

import cv2
import imageio
from pytube import YouTube
import numpy as np

def car_detection(url, frame_gap, initial_path, name):
    cars_classifier = cv2.CascadeClassifier('cars.xml')
    yt = YouTube(url)
    stream = yt.streams.filter(file_extension='mp4').first()
    stream.download(output_path=initial_path, filename=name)
    path = initial_path
    path += '/'
    path += name
    path += '.mp4'

    reader = imageio.get_reader(path)
    frame_number_counter = 0
    traffic_density_values = []
    for frame in reader:
        frame_number_counter = frame_number_counter + 1
        if frame is not None:
            if frame_number_counter == 1 or frame_number_counter % frame_gap == 0:
                blur = cv2.blur(np.float32(frame), (3, 3))
                gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
                gray = np.array(gray, dtype='uint8')
                cars = cars_classifier.detectMultiScale(gray)
                count = 0
                for (x, y, w, h) in cars:
                    count = count + 1
                traffic_density_values.append(count)
                key = cv2.waitKey(1)
                if key == 27:
                    break
        else:
            break
    print(traffic_density_values)
    return traffic_density_values

I receive the following error in my Android Studio logcat:

2021-01-23 01:13:34.022 22417-22570/Brian.Okoth.trafficapp E/AndroidRuntime: FATAL EXCEPTION: Thread-2
    Process: Brian.Okoth.trafficapp, PID: 22417
    com.chaquo.python.PyException: RuntimeError: No ffmpeg exe could be found. Install ffmpeg on your system, or set the IMAGEIO_FFMPEG_EXE environment variable.
        at <python>.imageio_ffmpeg._utils.get_ffmpeg_exe(_utils.py:49)
        at <python>.imageio_ffmpeg._io._get_exe(_io.py:19)
        at <python>.imageio_ffmpeg._io.read_frames(_io.py:140)
        at <python>.imageio.plugins.ffmpeg._initialize(ffmpeg.py:468)
        at <python>.imageio.plugins.ffmpeg._open(ffmpeg.py:323)
        at <python>.imageio.core.format.__init__(format.py:221)
        at <python>.imageio.core.format.get_reader(format.py:170)
        at <python>.imageio.core.functions.get_reader(functions.py:186)
        at <python>.car_detection_and_counter_python_script.car_detection(car_detection_and_counter_python_script.py:18)
        at <python>.chaquopy_java.call(chaquopy_java.pyx:380)
        at <python>.chaquopy_java.Java_com_chaquo_python_PyObject_callAttrThrowsNative(chaquopy_java.pyx:352)
        at com.chaquo.python.PyObject.callAttrThrowsNative(Native Method)
        at com.chaquo.python.PyObject.callAttrThrows(PyObject.java:232)
        at com.chaquo.python.PyObject.callAttr(PyObject.java:221)
        at com.example.trafficapp.car_detection_thread.run(car_detection_thread.java:46)

However, the Python code works just fine on my PC. The error only occurs when I try to run the script with Chaquopy. How can I install ffmpeg on my system, or set the IMAGEIO_FFMPEG_EXE environment variable? Below are the packages installed in build.gradle:

pip {
    install "sklearn"
    install "pandas"
    install "opencv-python"
    install "pytube==10.4.1"
    install "imageio"
    install "imageio-ffmpeg"
    install "numpy"
    install "ffmpeg"
    install "ffmpeg-python"
}
mhsmith commented 3 years ago

See #143.

Brokoth commented 3 years ago

See #143.

This suggestion worked, thank you. I decided to use the mobile-ffmpeg library to split the video into individual images on my Android device and run my operations on those images, bypassing the cv2.VideoCapture() method and all the headache it brings. Here is the edited script:

import os
import cv2
import os.path
import requests
from pytube import YouTube
import numpy as np
from com.arthenica.mobileffmpeg import FFmpeg
from com.arthenica.mobileffmpeg import FFprobe

def vid_saving(url1, url2, url3, url4, frame_gap, initial_path):
    # Download the Haar cascade used for car detection and load it.
    cars_xml_url = \
        "https://github.com/Brokoth/TrafficAppData/blob/main/Vehicle%20and%20pedestrain%20detection/cars.xml?raw=true"
    cars_xml_file = requests.get(cars_xml_url).content
    with open(initial_path + '/cars.xml', 'wb') as file:
        file.write(cars_xml_file)
    cars_classifier = cv2.CascadeClassifier(initial_path + '/cars.xml')

    all_lane_traffic_density_values = []
    for lane_number, url in enumerate([url1, url2, url3, url4], start=1):
        lane_traffic_density_values = []
        yt = YouTube(url)
        vid_length_in_seconds = yt.length
        stream = yt.streams.filter(file_extension='mp4').first()
        stream.download(output_path=initial_path, filename='vid' + str(lane_number))
        path = initial_path + '/vid' + str(lane_number) + '.mp4'
        # Use mobile-ffmpeg to split the downloaded video into numbered PNG
        # frames, then delete the video file.
        FFmpeg.execute("-i " + path + " -r " + str(frame_gap) + " -f image2 "
                       + initial_path + "/image-%2d.png")
        os.remove(path)
        image_read_counter = 1
        while image_read_counter < int(vid_length_in_seconds * frame_gap):
            image_path = initial_path + '/image-' + ('%02d' % image_read_counter) + '.png'
            img = cv2.imread(image_path)
            if img is None:
                break
            # Blur, convert to grayscale and count the cars detected in the frame.
            blur = cv2.blur(img, (3, 3))
            gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
            cars = cars_classifier.detectMultiScale(gray)
            lane_traffic_density_values.append(len(cars))
            os.remove(image_path)
            image_read_counter = image_read_counter + 1
        all_lane_traffic_density_values.append(lane_traffic_density_values)
    # print(os.listdir(initial_path))
    return '/'.join(str(values) for values in all_lane_traffic_density_values)
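
For reference, here is a minimal, hypothetical sketch of how the function above might be called and its return value unpacked; the URLs, frame gap and directory are placeholders, not values from the original comment:

# Hypothetical call to vid_saving(); every value below is a placeholder.
files_dir = "/path/to/app/files"   # e.g. the value of getFilesDir().toString() passed in from Java
urls = ["https://youtu.be/aaaa", "https://youtu.be/bbbb",
        "https://youtu.be/cccc", "https://youtu.be/dddd"]

result = vid_saving(urls[0], urls[1], urls[2], urls[3], 1, files_dir)
# The function joins the four per-lane lists into one string, so the caller
# splits them back apart on '/'; each part is the string form of a list.
lane1, lane2, lane3, lane4 = result.split('/')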
RipPennell commented 3 years ago

The code above helped me, but I decided to implement a solution where the FFmpeg package is used in Java instead, so that an audio clip could be converted to a WAV file before Python loads it with librosa. My implementation is below in case it helps:

String dirPath = getFilesDir().getAbsolutePath() + "/oneTimeTestDatabase";
File projDir = new File(dirPath);
if (!projDir.exists()) projDir.mkdirs();
audioSavePathInDevice = dirPath + "/audioFile.m4a";
wavFile = dirPath + "/wavFile.wav";
. . .
FFmpeg.execute("-i " + audioSavePathInDevice + " -y " + wavFile);
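
A minimal sketch of the Python side this refers to, assuming the WAV path written by the Java code above is handed to Python via Chaquopy; the function name is a placeholder, not part of the original comment:

import librosa

def load_wav(wav_path):
    # Load the WAV produced by the FFmpeg.execute() call above;
    # sr=None keeps the file's native sample rate instead of resampling.
    y, sr = librosa.load(wav_path, sr=None)
    return y, sr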

ashnaeldho commented 3 years ago

Hi,

I'm new to Android Studio. I want to develop an Android app that can do motion detection, and the motion detection algorithm is in Python. So my question is: if I want to use mobile-ffmpeg to convert the video to frames and then do motion detection (I plan to reuse some of the Python code above for getting the frames), how can I pass the video to this Python code? The video loading portion is done in Java.

I know Chaquopy can be used for communication between Java and Python, but I don't know how to pass the video to the Python code above.

If someone knows the answer, can you please help me?

Brokoth commented 3 years ago

@ashnaeldho Using Chaquopy, send the path of the video on your phone to the Python code:

 if (!Python.isStarted())
     Python.start(new AndroidPlatform(c));
 Python python = Python.getInstance();
 PyObject pyObj = python.getModule("python_script");
 PyObject returnedData = pyObj.callAttr("process_video", path);  // first argument is the name of the Python function to call; "process_video" is a placeholder

You can use this method to get the app's internal files directory:

String path = YourActivity.this.getFilesDir().toString();

Concatenate the rest of the video's path onto the end of the path variable. Then you can refer to the Python code above to see how to handle the parameters you passed.
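
To make the hand-off concrete, here is a minimal sketch of what the receiving module (python_script.py in the Java snippet above) might look like, reusing the mobile-ffmpeg call from the script earlier in this thread; the function name and the fixed frame rate are placeholders:

# python_script.py -- hypothetical receiving module.
import os
from com.arthenica.mobileffmpeg import FFmpeg

def process_video(path):
    # Split the video already stored at `path` on the device into PNG frames,
    # the same way vid_saving() does earlier in this thread, and return the
    # frame filenames for further processing.
    out_dir = os.path.dirname(path)
    FFmpeg.execute("-i " + path + " -r 1 -f image2 " + out_dir + "/image-%2d.png")
    return [f for f in os.listdir(out_dir) if f.endswith('.png')]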

msabbir42 commented 1 year ago

The code above helped me, but I decided to implement a solution where the FFmpeg package was used in Java instead, so that an audio clip could be converted to a WAV file before Python loads it with librosa.


Thank you, it worked fine.

ghshgd commented 8 months ago

@Brokoth Hi, I want to use mobile-ffmpeg as in your code. How can I use it with from com.arthenica.mobileffmpeg import FFmpeg? I couldn't get that import to work even after adding the dependency with dependencies { implementation 'com.arthenica:mobile-ffmpeg-full:4.4' }.