kkroening / ffmpeg-python

Python bindings for FFmpeg - with complex filtering support
Apache License 2.0

async not working for HEVC #422

Open jiequanz opened 4 years ago

jiequanz commented 4 years ago

When I use h264, async seems to work. However, when I use HEVC, the generated video seems to be missing frames and encoding takes longer. Here is my code.

config = {'c:v': 'hevc'}
config = {'c:v': 'h264'}  # note: this second assignment overrides the first
process = (
    ffmpeg
    .input('pipe:', framerate=15, format='rawvideo', pix_fmt='bgr24', s='640x480')
    .output('videos/movie264.mp4', pix_fmt='nv21', **config)
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

How should I fix it? Thanks!

Kotters commented 4 years ago

HEVC is a more complex codec with a much higher CPU cost, so it's expected to take longer. As for dropped frames, that is likely an issue with FFmpeg itself, your input data, or how you're invoking it. ffmpeg-python is just a wrapper around FFmpeg, and from the code you've provided it appears to be doing its job.
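To put rough numbers on that CPU-cost point (using the dimensions from the snippet above): each raw bgr24 frame pushed down the pipe is width × height × 3 bytes, and the encoder has to consume them at the capture frame rate. If the HEVC encoder falls behind this rate, the OS pipe buffer fills and `stdin.write()` blocks on the capture side, which can look like stutter or lost frames.

```python
# Sanity check on the data rate the encoder must sustain
# (640x480 bgr24 at 15 fps, as in the code above).
width, height, channels = 640, 480, 3
frame_bytes = width * height * channels
bytes_per_second = frame_bytes * 15
print(frame_bytes)       # 921600 bytes per raw frame
print(bytes_per_second)  # 13824000 bytes/s the encoder must keep up with
```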

jiequanz commented 4 years ago

config = {'c:v': 'hevc'}
config = {'c:v': 'h264'}
process = (
    ffmpeg
    .input('pipe:', framerate=15, format='rawvideo', pix_fmt='bgr24', s='640x480')
    .output('videos/movie264.mp4', pix_fmt='nv21', **config)
    .overwrite_output()
    .run_async(pipe_stdin=True)
)
# config = {'c:v': 'hevc'}
process_depth = (
    ffmpeg
    .input('pipe:', framerate=15, format='rawvideo', s='640x480')
    .output('videos/movie264_depth.mp4', pix_fmt='ppm', **config)  # pix_fmt='nv21'
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 15)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 15)

# Start streaming
pipeline.start(config)
i = 0

try:
    while i < 18000:

        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        # Stack both images horizontally
        images = np.hstack((color_image, depth_colormap))
    #     print(color_image.shape)
        import time  # ideally imported once at the top of the script
        start = time.time()
        process.stdin.write(
               color_image
               .astype(np.uint8)
               .tobytes()
            )  

        end = time.time()
        print('total processing time:', end - start)

        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)
        cv2.waitKey(1)
    #         cv2.imwrite('images/image_'+str(i)+'.jpg',color_image )
        i += 1
finally:

    # Stop streaming
    pipeline.stop()

jiequanz commented 4 years ago

This is how I wrote the code. Are there any suggestions?
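One concrete problem in the depth branch: `'ppm'` is an image/container format, not a pixel format, and a `rawvideo` input always needs an explicit `pix_fmt` (the colormapped depth frame produced by `cv2.applyColorMap` is bgr24, just like the color stream). A sketch of the ffmpeg command line the depth pipeline should roughly correspond to — the flags are standard ffmpeg CLI options, but the exact choices (bgr24 input, yuv420p output) are assumptions about this setup:

```python
# Hypothetical corrected invocation for the depth stream, written as a
# plain ffmpeg argument list so the fix is explicit: raw bgr24 in,
# a normal encoder pixel format (yuv420p) out — not 'ppm'.
args = [
    'ffmpeg',
    '-f', 'rawvideo', '-pix_fmt', 'bgr24',
    '-s', '640x480', '-framerate', '15',
    '-i', 'pipe:',
    '-c:v', 'hevc', '-pix_fmt', 'yuv420p',
    '-y', 'videos/movie264_depth.mp4',
]
print(' '.join(args))
```

The same options map directly back onto the ffmpeg-python calls in the script: the input-side flags go in `.input()` and the output-side flags in `.output()`.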