Closed Brokoth closed 3 years ago
See #143.
This suggestion worked, thank you. I decided to use the ffmpeg library to save the video as individual images on my Android device and run my operations on those images, bypassing cv2.VideoCapture() and all the headaches it brings. Here is the edited script:
```python
import os
import cv2
import requests
from pytube import YouTube
from com.arthenica.mobileffmpeg import FFmpeg


def vid_saving(url1, url2, url3, url4, frame_gap, initial_path):
    # Download the Haar cascade used for car detection.
    cars_xml_url = \
        "https://github.com/Brokoth/TrafficAppData/blob/main/Vehicle%20and%20pedestrain%20detection/cars.xml?raw=true"
    cars_xml_file = requests.get(cars_xml_url).content
    with open(initial_path + '/cars.xml', 'wb') as file:
        file.write(cars_xml_file)
    cars_classifier = cv2.CascadeClassifier(initial_path + '/cars.xml')

    all_lane_density_values = []
    for count, url in enumerate((url1, url2, url3, url4), start=1):
        yt = YouTube(url)
        vid_length_in_seconds = yt.length
        stream = yt.streams.filter(file_extension='mp4').first()
        stream.download(output_path=initial_path, filename='vid%d' % count)
        path = initial_path + '/vid%d.mp4' % count

        # Split the video into frame_gap images per second, then delete it.
        FFmpeg.execute("-i " + path + " -r " + str(frame_gap)
                       + " -f image2 " + initial_path + "/image-%02d.png")
        os.remove(path)

        lane_density_values = []
        image_read_counter = 1
        while image_read_counter < int(vid_length_in_seconds * frame_gap):
            image_path = initial_path + '/image-%02d.png' % image_read_counter
            img = cv2.imread(image_path)
            if img is None:
                break
            blur = cv2.blur(img, (3, 3))
            gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
            cars = cars_classifier.detectMultiScale(gray)
            lane_density_values.append(len(cars))
            os.remove(image_path)
            image_read_counter += 1
        all_lane_density_values.append(lane_density_values)

    # print(os.listdir(initial_path))
    # Return the four per-lane lists joined into one string for the Java side.
    return '/'.join(str(values) for values in all_lane_density_values)
```
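On the receiving side, the '/'-joined string returned above can be split back into the four per-lane lists. A minimal sketch (the helper name is my own; `ast.literal_eval` parses each list segment safely):

```python
import ast

def parse_density_string(result):
    # Each '/'-separated segment looks like "[3, 5, 2]";
    # ast.literal_eval turns it back into a Python list.
    return [ast.literal_eval(segment) for segment in result.split('/')]

print(parse_density_string("[1, 2]/[3]/[]/[4, 5, 6]"))
```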
The code above helped me, but I ended up using the ffmpeg package from Java instead, so that an audio clip could be converted into a WAV file before Python loaded it with librosa. My implementation is below in case it helps:

```java
String dirPath = getFilesDir().getAbsolutePath() + "/oneTimeTestDatabase";
File projDir = new File(dirPath);
if (!projDir.exists())
    projDir.mkdirs();
audioSavePathInDevice = dirPath + "/audioFile.m4a";
wavFile = dirPath + "/wavFile.wav";
. . .
FFmpeg.execute("-i " + audioSavePathInDevice + " -y " + wavFile);
```
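For reference, the same conversion can be expressed in plain Python around the ffmpeg command line (the paths here are illustrative; building the argument list separately avoids shell quoting issues):

```python
def build_wav_conversion_cmd(input_path, output_path):
    # Mirrors the Java FFmpeg.execute call above;
    # -y overwrites the output file if it already exists.
    return ["ffmpeg", "-i", input_path, "-y", output_path]

cmd = build_wav_conversion_cmd("/data/audioFile.m4a", "/data/wavFile.wav")
print(cmd)  # pass to subprocess.run(cmd, check=True) when ffmpeg is installed
```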
Hi,
I'm new to Android Studio. I want to develop an Android app that does motion detection. The motion detection algorithm is in Python, so my question is: if I want to use mobile-ffmpeg to convert video to frames and then do motion detection (I'm planning to reuse some of the Python code above for extracting frames), how can I pass the video to that Python code? The video loading is done in Java.
I know Chaquopy can be used for communication between Java and Python, but I don't know how to pass a video to the Python code above.
If someone knows the answer, please help me.
@ashnaeldho Using Chaquopy, send the path of the video on your phone to the Python code (note that callAttr takes the name of the Python function first, then its arguments; the function name below is a placeholder):

```java
if (!Python.isStarted())
    Python.start(new AndroidPlatform(c));
Python python = Python.getInstance();
PyObject pyObj = python.getModule("python_script");
// Replace "your_function_name" with the Python function you want to call.
PyObject returnedData = pyObj.callAttr("your_function_name", path);
```
You can use this method to get the root path:

```java
String path = YourActivity.this.getFilesDir().toString();
```

Concatenate the remainder of the video's path onto the end of the `path` variable. Then you can refer to the Python code above to see how to handle the parameters you passed.
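On the Python side, the module named in getModule ("python_script" above) just needs a top-level function that accepts the path string. A minimal sketch (the function name and return format are my own, not Chaquopy requirements):

```python
# python_script.py -- receives the absolute device path sent from Java.
import os.path

def process_video(path):
    # Guard against a bad path before handing it to cv2/ffmpeg.
    if not os.path.exists(path):
        return "missing: " + path
    return "processing " + os.path.basename(path)

print(process_video("/no/such/video.mp4"))
```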
Thank you, it worked fine.
@Brokoth Hi, I want to use mobile-ffmpeg as in your code. How can I use it from Python like `from com.arthenica.mobileffmpeg import FFmpeg`? I couldn't use that import even after adding the dependency:

```
dependencies { implementation 'com.arthenica:mobile-ffmpeg-full:4.4' }
```
When using imageio to read YouTube videos in Python as shown below:
I receive the following error in my Android Studio logcat:
However, the Python code works just fine on my PC. The error only occurs when I try to run the script with Chaquopy. How can I install ffmpeg on my system, or set the IMAGEIO_FFMPEG_EXE environment variable? Below are the installed packages in build.gradle:
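One option, assuming your script controls its own startup, is to set IMAGEIO_FFMPEG_EXE before imageio is first imported, pointing it at an ffmpeg binary already present on the device (the path below is a placeholder):

```python
import os

# imageio-ffmpeg consults this variable when locating the ffmpeg executable,
# so it must be set before the first import of imageio / imageio_ffmpeg.
os.environ["IMAGEIO_FFMPEG_EXE"] = "/path/to/your/ffmpeg"

print(os.environ["IMAGEIO_FFMPEG_EXE"])
```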