sherlockchou86 / VideoPipe

A cross-platform video structuring (video analytics) framework. If you find it helpful, please give it a star : ). **The next version of VideoPipe is under development; while staying cross-platform and easy to use, its performance is expected to approach the official frameworks of each hardware platform, such as DeepStream.**
Apache License 2.0

cannot open mp4 file #54

Open kimsjpk1 opened 2 weeks ago

kimsjpk1 commented 2 weeks ago

I tried to run video restoration on an mp4 file.

The error message is:

[2024-06-12 09:58:33.606][Warn ][7fcbbca62000][/root/VideoPipe/nodes/vp_file_src_node.cpp:47] [file_src_0] open file failed, try again...

Does VideoPipe open the mp4 file with FFmpeg or with GStreamer? Which one does it use?

```python
import cv2
print(cv2.getBuildInformation())
```

```
Video I/O:
  FFMPEG:       NO
    avcodec:    YES (58.134.100)
    avformat:   YES (58.76.100)
    avutil:     YES (56.70.100)
    swscale:    YES (5.9.100)
    avresample: NO
  GStreamer:    YES (1.20.3)
  v4l/v4l2:     YES (linux/videodev2.h)
```
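Since this build reports FFMPEG: NO but GStreamer: YES, OpenCV will decode through GStreamer, and vp_file_src_node appears to open files through a GStreamer pipeline as well (see the `filesrc ... ! appsink` line in the later log). A minimal sketch to check that the GStreamer path can open the file outside VideoPipe, assuming the same pipeline shape and the same relative path as in the code below:

```python
import cv2

# Pipeline shape copied from the vp_file_src_node log line; adjust the location as needed.
pipeline = ("filesrc location=./vp_data/test_video/yoga.mp4 ! qtdemux ! h264parse "
            "! avdec_h264 ! videoconvert ! appsink")

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print("opened:", cap.isOpened())
ok, frame = cap.read()
print("first frame:", ok, None if frame is None else frame.shape)
cap.release()
```

If this also fails to open, the cause is on the GStreamer/OpenCV side (for example a missing avdec_h264 from gst-libav) rather than in VideoPipe itself.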

My code is:

```cpp
#include "../nodes/vp_file_src_node.h"
#include "../nodes/infers/vp_openpose_detector_node.h"
#include "../nodes/osd/vp_pose_osd_node.h"
#include "../nodes/vp_screen_des_node.h"
#include "../nodes/infers/vp_restoration_node.h"
#include "../utils/analysis_board/vp_analysis_board.h"

int main() {
    VP_SET_LOG_LEVEL(vp_utils::vp_log_level::INFO);
    VP_LOGGER_INIT();

    // create nodes
    auto file_src_0 = std::make_shared<vp_nodes::vp_file_src_node>("file_src_0", 0, "./vp_data/test_video/yoga.mp4");
    //auto openpose_detector = std::make_shared<vp_nodes::vp_openpose_detector_node>("openpose_detector", "./vp_data/models/openpose/pose/body_25_pose_iter_584000.caffemodel", "./vp_data/models/openpose/pose/body_25_pose_deploy.prototxt", "", 368, 368, 1, 0, 0.1, vp_objects::vp_pose_type::body_25);
    auto restoration_node = std::make_shared<vp_nodes::vp_restoration_node>("restoration_node", "./vp_data/models/restoration/RealESRGAN_x4plus.onnx");

    //auto pose_osd_0 = std::make_shared<vp_nodes::vp_pose_osd_node>("pose_osd_0");
    auto screen_osd_0 = std::make_shared<vp_nodes::vp_pose_osd_node>("pose_osd_0");
    auto screen_des_0 = std::make_shared<vp_nodes::vp_screen_des_node>("screen_des_0", 0);

    // construct pipeline
    restoration_node->attach_to({file_src_0});
    //pose_osd_0->attach_to({openpose_detector});
    screen_osd_0->attach_to({restoration_node});
    screen_des_0->attach_to({screen_osd_0});

    file_src_0->start();

    // for debug purpose
    vp_utils::vp_analysis_board board({file_src_0});
    board.display();
}
```
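One simple thing to rule out for the "open file failed, try again..." warning is the relative path: file_src_0 is given `./vp_data/test_video/yoga.mp4`, which has to exist relative to the directory the sample is launched from. A quick sketch to check (path copied from the code above):

```python
import os

# Must resolve to the real video file from the directory where the sample is run.
path = './vp_data/test_video/yoga.mp4'
print('exists:', os.path.exists(path), '->', os.path.abspath(path))
```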

kimsjpk1 commented 2 weeks ago

I got the mp4 file to read when running openpose_sample, but it can't read RealESRGAN_x4plus.onnx.

The pth-to-onnx conversion code:

```python
from basicsr.archs.rrdbnet_arch import RRDBNet
from basicsr.utils.download_util import load_file_from_url
import os
import torch
from PIL import Image
import numpy as np
import argparse
from torchvision.transforms import ToTensor

model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
netscale = 4
file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth']

model_path = os.path.join('weights', 'RealESRGAN_x4plus' + '.pth')
if not os.path.isfile(model_path):
    ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
    for url in file_url:
        # model_path will be updated
        model_path = load_file_from_url(
            url=url, model_dir=os.path.join(ROOT_DIR, 'weights'), progress=True, file_name=None)

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('running on device ' + str(device))

# exporter settings
parser = argparse.ArgumentParser()
parser.add_argument('--model_in', type=str, default='super_resolution.pytorch')
parser.add_argument('--model_out', type=str, default='RealESRGAN_x4plus.onnx')
parser.add_argument('--image', type=str, required=False, help='input image to use', default='test0.jpg')
opt = parser.parse_args()

# load the image
img = Image.open(opt.image)
img_to_tensor = ToTensor()
input = img_to_tensor(img).view(1, -1, img.size[1], img.size[0]).to(device)

# load the weights ('params_ema' holds the model weights in this checkpoint)
# model = torch.load(model_path).to(device)  # this returns a checkpoint dict, not a module; use load_state_dict instead
model.load_state_dict(torch.load(model_path)['params_ema'], strict=True)
model = model.to(device)
model.eval()

# export the model
input_names = ["input_0"]
output_names = ["output_0"]

print('exporting model to ONNX...')
torch.onnx.export(model, input, opt.model_out, verbose=True,
                  input_names=input_names, output_names=output_names)
print('model exported to {:s}'.format(opt.model_out))
```

The error message is:

```
[2024-06-13 13:41:09.605][Info ] [file_src_0] [filesrc location=/opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! appsink]
[2024-06-13 13:41:09.608][Warn ] [restoration_node] cv::dnn::readNet load network failed!
[2024-06-13 13:41:09.814][Info ] [screen_des_0_ori] [appsrc ! videoconvert ! videoscale ! textoverlay text=screen_des_0_ori halignment=left valignment=top font-desc='Sans,16' shaded-background=true ! timeoverlay halignment=right valignment=top font-desc='Sans,16' shaded-background=true ! queue ! fpsdisplaysink video-sink=ximagesink sync=false]
[2024-06-13 13:41:09.814][Info ] [screen_des_0] [appsrc ! videoconvert ! videoscale ! textoverlay text=screen_des_0 halignment=left valignment=top font-desc='Sans,16' shaded-background=true ! timeoverlay halignment=right valignment=top font-desc='Sans,16' shaded-background=true ! queue ! fpsdisplaysink video-sink=ximagesink sync=false]
[2024-06-13 13:41:09.815][Info ] [file_des_0] [appsrc ! videoconvert ! %s bitrate=%d ! mp4mux ! filesink location=%s]
[2024-06-13 13:41:09.815][Info ] ############# pipe check summary ##############
total layers: 3
layer index, node names
1 file_src_0,
2 restoration_node,
3 screen_des_0,file_des_0,screen_des_0_ori,
############# pipe check summary ##############

(openpose_sample:10): dbind-WARNING **: 05:41:09.840: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-xkbDkfgXON: Connection refused

(openpose_sample:10): GStreamer-WARNING **: 05:41:09.840: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though.

(openpose_sample:10): GStreamer-WARNING **: 05:41:10.502: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(openpose_sample:10): GStreamer-WARNING **: 05:41:10.506: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.1: cannot open shared object file: No such file or directory

(openpose_sample:10): GStreamer-WARNING **: 05:41:10.507: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_ucx.so': libucs.so.0: cannot open shared object file: No such file or directory
```
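The `cv::dnn::readNet load network failed!` line indicates the restoration node loads the model through OpenCV's DNN module, so the exported file can be checked on its own, independently of VideoPipe. A minimal sketch, assuming the model path used in the pipeline code above:

```python
import cv2

# Try to load the exported model with OpenCV's DNN ONNX loader.
try:
    net = cv2.dnn.readNetFromONNX('./vp_data/models/restoration/RealESRGAN_x4plus.onnx')
    print('ONNX loaded, layer count:', len(net.getLayerNames()))
except cv2.error as e:
    print('OpenCV DNN could not load the ONNX:', e)
```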

The execution screenshot is below.

(screenshot: osd_image)

How can I solve it? Thanks in advance.

wenqifu commented 2 weeks ago

I ran into a similar "open file failed" issue when trying to run lane_detect_sample. Did you figure out the cause?

kimsjpk1 commented 2 weeks ago

I solved the mp4 reading issue by running sh /opt/nvidia/deepstream/deepstream-7.0/user_addtional_install.sh inside the DeepStream docker container, and I solved the ONNX reading issue by exporting the ONNX with its input fixed to the mp4 resolution, for example (1280, 720, 3). I wrote this up on my Naver blog, but it is in Korean, so you may need Chrome's translation: https://blog.naver.com/kimsjpk/223480168608
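A minimal sketch of that kind of fixed-resolution export, assuming 1280x720 frames (i.e. a (1, 3, 720, 1280) NCHW dummy input) and the same checkpoint and tensor names as in the conversion script above:

```python
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet

# Rebuild the RealESRGAN_x4plus generator and load its weights.
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
model.load_state_dict(torch.load('weights/RealESRGAN_x4plus.pth', map_location='cpu')['params_ema'], strict=True)
model.eval()

# Fix the input shape to the video resolution: (N, C, H, W) = (1, 3, 720, 1280).
dummy = torch.randn(1, 3, 720, 1280)
torch.onnx.export(model, dummy, 'RealESRGAN_x4plus.onnx',
                  input_names=['input_0'], output_names=['output_0'])
```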

Also, I'd like to know how to run VideoPipe in Release mode. I think VideoPipe builds in Debug mode by default, so when I run the video restoration model the frame rate drops to 11~12 fps.

Thanks in advance, please help me.

sherlockchou86 commented 2 weeks ago

You need to improve your hardware, such as the GPU, to increase the fps, not the build mode (just change the build mode in your CMakeLists.txt).