basler / pypylon

The official python wrapper for the pylon Camera Software Suite
http://www.baslerweb.com
BSD 3-Clause "New" or "Revised" License

How to record video #113

Closed anarayan09 closed 7 months ago

anarayan09 commented 5 years ago

Using a Basler acA1920-25uc on Windows with Python 3.7, I can capture and save images (using samples like grab.py, guiimagewindow.py, save_image.py), but how do I acquire and record video via pypylon?

cs2r commented 5 years ago

check this. https://stackoverflow.com/questions/49782358/save-video-instead-of-saving-images-while-using-basler-camera-and-python

anarayan09 commented 5 years ago

Tried running the sample code and I get the following error:

AttributeError: module 'pypylon' has no attribute 'factory'

AnuArun3 commented 4 years ago

Even I am getting the same error.

AnuArun3 commented 4 years ago

Can anyone help with this?

thiesmoeller commented 4 years ago

As a sample for a GEV camera, this code shows:

  • Using an ffmpeg process to record video that is created from a Python process.
  • Storing YUV422 color data directly from the camera to H.264, without conversion to RGB.

Code to use the videowriter class is in the main at the end.

import subprocess as sp
import os

### for demonstration of how to write video data
### this class is an excerpt from the project moviepy https://github.com/Zulko/moviepy.git moviepy/video/io/ffmpeg_writer.py
###
class FFMPEG_VideoWriter:
    """ A class for FFMPEG-based video writing.

    A class to write videos using ffmpeg. ffmpeg will write in a large
    choice of formats.

    Parameters
    -----------

    filename
      Any filename like 'video.mp4' etc. but if you want to avoid
      complications it is recommended to use the generic extension
      '.avi' for all your videos.

    size
      Size (width,height) of the output video in pixels.

    fps
      Frames per second in the output video file.

    codec
      FFMPEG codec. It seems that in terms of quality the hierarchy is
      'rawvideo' = 'png' > 'mpeg4' > 'libx264'
      'png' manages the same lossless quality as 'rawvideo' but yields
      smaller files. Type ``ffmpeg -codecs`` in a terminal to get a list
      of accepted codecs.

      Note for default 'libx264': by default the pixel format yuv420p
      is used. If the video dimensions are not both even (e.g. 720x405)
      another pixel format is used, and this can cause problem in some
      video readers.

    audiofile
      Optional: The name of an audio file that will be incorporated
      to the video.

    preset
      Sets the time that FFMPEG will take to compress the video. The slower,
      the better the compression rate. Possibilities are: ultrafast,superfast,
      veryfast, faster, fast, medium (default), slow, slower, veryslow,
      placebo.

    bitrate
      Only relevant for codecs which accept a bitrate. "5000k" offers
      nice results in general.

    withmask
      Boolean. Set to ``True`` if there is a mask in the video to be
      encoded.

    """

    def __init__(self, filename, size, fps, codec="libx264", audiofile=None,
                 preset="medium", bitrate=None, pixfmt="rgba",
                 logfile=None, threads=None, ffmpeg_params=None):

        if logfile is None:
            logfile = sp.PIPE

        self.filename = filename
        self.codec = codec
        self.ext = self.filename.split(".")[-1]

        # order is important
        cmd = [
            "ffmpeg-4.2.1-win64-static/bin/ffmpeg",
            '-y',
            '-loglevel', 'error' if logfile == sp.PIPE else 'info',
            '-f', 'rawvideo',
            '-vcodec', 'rawvideo',
            '-s', '%dx%d' % (size[1], size[0]),
            '-pix_fmt', pixfmt,
            '-r', '%.02f' % fps,
            '-i', '-', '-an',
        ]
        cmd.extend([
            '-vcodec', codec,
            '-preset', preset,
        ])
        if ffmpeg_params is not None:
            cmd.extend(ffmpeg_params)
        if bitrate is not None:
            cmd.extend([
                '-b', bitrate
            ])
        if threads is not None:
            cmd.extend(["-threads", str(threads)])

        if ((codec == 'libx264') and
                (size[0] % 2 == 0) and
                (size[1] % 2 == 0)):
            cmd.extend([
                '-pix_fmt', 'yuv420p'
            ])
        cmd.extend([
            filename
        ])

        popen_params = {"stdout": sp.DEVNULL,
                        "stderr": logfile,
                        "stdin": sp.PIPE}

        # This was added so that no extra unwanted window opens on windows
        # when the child process is created
        if os.name == "nt":
            popen_params["creationflags"] = 0x08000000  # CREATE_NO_WINDOW

        self.proc = sp.Popen(cmd, **popen_params)

    def write_frame(self, img_array):
        """ Writes one frame in the file."""
        try:
            self.proc.stdin.write(img_array.tobytes())
        except IOError as err:
            _, ffmpeg_error = self.proc.communicate()
            error = (str(err) + ("\n\nMoviePy error: FFMPEG encountered "
                                 "the following error while writing file %s:"
                                 "\n\n %s" % (self.filename, str(ffmpeg_error))))

            if b"Unknown encoder" in ffmpeg_error:

                error = error+("\n\nThe video export "
                  "failed because FFMPEG didn't find the specified "
                  "codec for video encoding (%s). Please install "
                  "this codec or change the codec when calling "
                  "write_videofile. For instance:\n"
                  "  >>> clip.write_videofile('myvid.webm', codec='libvpx')")%(self.codec)

            elif b"incorrect codec parameters ?" in ffmpeg_error:

                 error = error+("\n\nThe video export "
                  "failed, possibly because the codec specified for "
                  "the video (%s) is not compatible with the given "
                  "extension (%s). Please specify a valid 'codec' "
                  "argument in write_videofile. This would be 'libx264' "
                  "or 'mpeg4' for mp4, 'libtheora' for ogv, 'libvpx for webm. "
                  "Another possible reason is that the audio codec was not "
                  "compatible with the video codec. For instance the video "
                  "extensions 'ogv' and 'webm' only allow 'libvorbis' (default) as a"
                  "video codec."
                  )%(self.codec, self.ext)

            elif  b"encoder setup failed" in ffmpeg_error:

                error = error+("\n\nThe video export "
                  "failed, possibly because the bitrate you specified "
                  "was too high or too low for the video codec.")

            elif b"Invalid encoder type" in ffmpeg_error:

                error = error + ("\n\nThe video export failed because the codec "
                  "or file extension you provided is not a video")

            raise IOError(error)

    def close(self):
        if self.proc:
            self.proc.stdin.close()
            if self.proc.stderr is not None:
                self.proc.stderr.close()
            self.proc.wait()

        self.proc = None

    # Support the Context Manager protocol, to ensure that resources are cleaned up.

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()

if __name__ == '__main__':

    ## sample program for a GEV camera
    ## target is to write the YUV video data without further conversion
    ##
    import pypylon.pylon as py

    cam = py.InstantCamera(py.TlFactory.GetInstance().CreateFirstDevice())
    cam.Open()

    cam.PixelFormat = "YUV422Packed"

    with FFMPEG_VideoWriter("ffmpeg_demo.avi",(cam.Height(), cam.Width()), fps=30, pixfmt="uyvy422") as writer:

        cam.StartGrabbingMax(1000)
        while cam.IsGrabbing():
            res = cam.RetrieveResult(1000)
            writer.write_frame(res.Array)
            print(res.BlockID)
            res.Release()
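Side note: the ffmpeg binary path in cmd above is hard-coded to a local Windows build. If ffmpeg is available on your PATH, replacing that first entry should be enough, e.g.:

cmd = [
    "ffmpeg",   # rely on the system PATH instead of a bundled Windows build
    '-y',
    ...
]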
denisb411 commented 4 years ago

@thiesmoeller I'm using your code to record videos from a Basler camera, but I don't know how to precisely set the total recording time, i.e. the number I need to pass to cam.StartGrabbingMax().

I calculated this number as fps * seconds_I_want_to_record, but it's not working. When I use, for example, fps=24 and seconds=10, the .avi file only contains about 5 seconds.
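For reference, the calculation I am doing (illustrative numbers):

fps = 24
record_seconds = 10
frames_to_grab = fps * record_seconds  # 240 frames, which should correspond to ~10 s of video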

import datetime
import sys
from pypylon import pylon

def initiate_and_setup_cam(fps=24):
    # enable emulation 
    import os
    os.environ["PYLON_CAMEMU"] = "1"

    cam = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    cam.Open()
    cam.ImageFilename = img_dir
    cam.ImageFileMode = "On" # enable image file test pattern
    cam.TestImageSelector = "Off" # disable testpattern [ image file is "real-image"]
    cam.PixelFormat = "Mono8" # choose one pixel format. camera emulation does conversion on the fly

    cam.Height = height
    cam.Width = width

    cam.AcquisitionFrameRateAbs.SetValue(fps);

    return cam

if __name__ == '__main__':

    total_record_time = 2 * 60 ## in seconds
    chunks_time = 10 ## in seconds

    fps = 24

    time_initiated = datetime.datetime.now()
    while True:
        time_initiated_chunk = datetime.datetime.now()
        chunk_formatted_time = time_initiated_chunk.strftime("%d-%m-%Y-%H-%M-%S")
        with FFMPEG_VideoWriter('./recorded-videos/' + chunk_formatted_time + '.avi',(cam.Height(), cam.Width()), fps=24, pixfmt="rgba") as writer:
            while (time_initiated_chunk + datetime.timedelta(seconds=chunks_time) > datetime.datetime.now()):
                cam = initiate_and_setup_cam(fps)
                cam.StartGrabbingMax(fps)
                while cam.IsGrabbing():
                    res = cam.RetrieveResult(1000)
                    writer.write_frame(res.Array)
                    res.Release()
        if (time_initiated + datetime.timedelta(seconds=total_record_time) < datetime.datetime.now()):
            break

What am I doing wrong? Is there a way to use a while True loop to record indefinitely (until a break condition, in this case)? If so, I could use datetime to break out of the loop.
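Roughly what I have in mind (an untested sketch, assuming the camera and writer are already set up as in the code above):

cam.StartGrabbing(pylon.GrabStrategy_OneByOne)  # no fixed frame count
stop_at = datetime.datetime.now() + datetime.timedelta(seconds=total_record_time)
while cam.IsGrabbing() and datetime.datetime.now() < stop_at:
    res = cam.RetrieveResult(1000, pylon.TimeoutHandling_ThrowException)
    if res.GrabSucceeded():
        writer.write_frame(res.Array)
    res.Release()
cam.StopGrabbing()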

Currently I'm not using a real camera yet, just the camera emulator, but I don't think this matters.

denisb411 commented 4 years ago

The fps problem was caused by the custom images I was using with the camera emulation. It seems the emulation can't produce more than about 20 fps with custom image files.

To solve this, I deactivated the custom image set I was using, and then the emulation worked fine.
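For reference, the emulation switch I mean (mirroring the setup code above; the exact test-pattern name may differ):

cam.ImageFileMode = "Off"             # stop feeding the custom image files
cam.TestImageSelector = "Testimage1"  # fall back to a built-in test pattern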

felixmaldonadoos commented 3 years ago

@denisb411, do you mind posting how you managed to solve it? I have had issues maintaining a steady fps when turning JPGs into an AVI with cv2. I am trying to build a solution that doesn't store the images locally, but instead keeps them in a cache that is continuously appended to the AVI file. I have tried the image_time * image_fps = num_of_images_to_take approach, but how can I store those images in an array, keep appending to that array, and pull images from it to feed the cv2 writer?

Thanks in advance.
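Something like this is what I'm aiming for (a rough, untested sketch; file name, fps and frame size are illustrative):

import queue
import threading
import cv2

frame_queue = queue.Queue(maxsize=100)   # bounded in-memory cache between grabber and writer

def writer_worker(path, fps, frame_size):
    # consumer: pulls frames from the queue and appends them to the AVI
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writer = cv2.VideoWriter(path, fourcc, fps, frame_size)  # frame_size = (width, height)
    while True:
        frame = frame_queue.get()        # blocks until a frame (or the stop sentinel) arrives
        if frame is None:                # None is the stop sentinel
            break
        writer.write(frame)
    writer.release()

threading.Thread(target=writer_worker, args=("out.avi", 24, (1280, 1024)), daemon=True).start()
# the grab loop then just does frame_queue.put(img) for every frame and frame_queue.put(None) at the end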

mijalapenos commented 2 years ago

I'll share my code in case it helps someone. To cap the fps I set the AcquisitionFrameRate parameter to the desired value, but that might not work for higher values and might not be 100% precise, I'm not sure.

import pypylon.pylon as pylon
from imageio import get_writer

fps = 5  # Hz
time_to_record = 60  # seconds
images_to_grab = fps * time_to_record

tlf = pylon.TlFactory.GetInstance()
devices = tlf.EnumerateDevices()

cam = pylon.InstantCamera(tlf.CreateDevice(devices[0]))
cam.Open()
print("Using device ", cam.GetDeviceInfo().GetModelName())
cam.AcquisitionFrameRate.SetValue(fps)

writer = get_writer(
        'output-filename.mkv',  # mkv players often support H.264
        fps=fps,  # FPS is in units Hz; should be real-time.
        codec='libx264',  # When used properly, this is basically "PNG for video" (i.e. lossless)
        quality=None,  # disables variable compression
        ffmpeg_params=[  # compatibility with older library versions
            '-preset',   # set to fast, faster, veryfast, superfast, ultrafast
            'fast',      # for higher speed but worse compression
            '-crf',      # quality; set to 0 for lossless, but keep in mind
            '24'         # that the camera probably adds static anyway
        ]
)

print(f"Recording {time_to_record} second video at {fps} fps")
cam.StartGrabbingMax(images_to_grab, pylon.GrabStrategy_OneByOne)
while cam.IsGrabbing():
    with cam.RetrieveResult(1000, pylon.TimeoutHandling_ThrowException) as res:
        if res.GrabSucceeded():
            img = res.Array
            writer.append_data(img)
            print(res.BlockID, end='\r')
            res.Release()
        else:
            print("Grab failed")
            # raise RuntimeError("Grab failed")

print("Saving...", end=' ')
cam.StopGrabbing()
cam.Close()
print("Done")

I think precise image acquisition according to the desired fps might be possible using triggers but I have not looked into that. This might have some answers: https://docs.baslerweb.com/resulting-frame-rate
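For reference, a quick way to check what the camera will actually deliver before recording (a sketch; node names depend on the model and interface, e.g. GigE models use AcquisitionFrameRateAbs / ResultingFrameRateAbs instead):

cam.AcquisitionFrameRateEnable.SetValue(True)   # required on some models before the rate takes effect
cam.AcquisitionFrameRate.SetValue(fps)
print("camera will deliver about", cam.ResultingFrameRate.GetValue(), "fps")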

mijalapenos commented 2 years ago

I am going to share a problem I encountered: with the RGB8 pixel format I could not reach the 14 fps at full resolution stated in the camera's documentation. I solved it by using another format (YCbCr422_8) and converting the grabbed image via OpenCV (img = cv2.cvtColor(res.Array, cv2.COLOR_YUV2RGB_YUY2)).
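In context, the conversion goes into the grab loop from my earlier snippet roughly like this (sketch):

import cv2

# inside the grab loop above, with cam.PixelFormat = "YCbCr422_8"
with cam.RetrieveResult(1000, pylon.TimeoutHandling_ThrowException) as res:
    if res.GrabSucceeded():
        rgb = cv2.cvtColor(res.Array, cv2.COLOR_YUV2RGB_YUY2)  # YCbCr422_8 arrives as packed YUY2
        writer.append_data(rgb)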

thiesmoeller commented 2 years ago

Hi @mijalapenos, frame rates are typically specified for the highest-throughput scenario, which is raw Bayer transport. In your use case of recording H.264 video, transferring RGB over the wire, or even converting to RGB before encoding, wastes resources. The video encoders work in YUV format, so keeping the data in this format gives you the highest performance, both in fps on the wire and in system load on your host.

mijalapenos commented 2 years ago

Hello @thiesmoeller, I see your point; I spent at least half a day trying to achieve this. However, I could not find a pixel format in ffmpeg that works with any of the raw formats. I always receive a message such as Incompatible pixel format 'bayer_gbrg8' for codec 'libx264', auto-selecting format 'yuv444p', and then the video appears to be black and white. Do you have any suggestions on how to resolve this? Thank you.

thiesmoeller commented 2 years ago

Bayer format would give you the highest on-the-wire frame rate, but you will get a high system load (Bayer interpolation is computationally expensive). What you tried will be seen by libx264 as monochrome data.

So you have two options (maybe more ;-) ):

  • let the camera output YUV422 directly (e.g. YUV422Packed) and feed it to the encoder without conversion, or
  • transfer raw Bayer data and convert it to YUV/RGB on the host before encoding, at the cost of CPU load.

mijalapenos commented 2 years ago

I was aiming for the first option, but my camera (daA2500-14uc) does not support the YUV422Packed format, just YCbCr422_8, which sadly does not seem to have a corresponding pixel format in ffmpeg. Anyway, thank you for your assistance!

thiesmoeller commented 2 years ago

It is supported: the FourCC code is YUY2.
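Untested sketch of how that would plug into the FFMPEG_VideoWriter class above (YUY2 corresponds to ffmpeg's yuyv422 input format; file name and frame count are just placeholders):

cam.PixelFormat = "YCbCr422_8"
with FFMPEG_VideoWriter("out.avi", (cam.Height(), cam.Width()),
                        fps=14, pixfmt="yuyv422") as writer:
    cam.StartGrabbingMax(100)
    while cam.IsGrabbing():
        res = cam.RetrieveResult(1000)
        writer.write_frame(res.Array)
        res.Release()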

mijalapenos commented 2 years ago

It seems like -pix_fmt yuyv422 works only with -vcodec rawvideo, which generates extremely large files (10 minutes ≈ 74 GB). Other encoders switch to another format and therefore produce black-and-white footage.

bjbraun commented 1 year ago

Hi, thanks for the great code example @thiesmoeller! I am using a Basler ace acA1300-200uc USB 3.0 camera and the code seems to work in general. I am just unsure about the pixel format and the codec.

If I use the pixel format "YCbCr422_8" on the camera (cam.PixelFormat = "YCbCr422_8") together with pixfmt="uyvy422", my output video seems to be in the wrong color space (just green and blue). If I use pixfmt="yuyv422" the final video has the correct colors. Did I misunderstand something, or shouldn't YCbCr422_8 be equal to the YUV422Packed format that you use, with the pixel order from Basler actually being UYVY? I just want to make sure that there are no hidden mistakes in my videos.

And regarding the playback speed: it seems to be a lot slower than real time. Is it generally necessary to also adjust the camera's acquisition frame rate (or other parameters) to get real-time playback?

Thanks a lot already for your help!

meetAndEgg commented 1 year ago

(quoting @thiesmoeller's FFMPEG_VideoWriter example from above)

Hello, I'm using your FFMPEG_VideoWriter to record video in a PyQt application with multithreading: one thread shows the video and another records it with FFMPEG_VideoWriter. To synchronize the frames coming from the camera with the video writer I am using a Queue, but even with the Queue the per-frame fps still fluctuates. How can I make the fps as stable as possible?

My code follows (I've tried to simplify it, but it is still a little long). The main thread is MyMainForm; VideoWriter and MyVideo are each placed in a separate QThread. MyVideo runs an endless loop that grabs frames and sends them to the main thread while the camera is open. MyMainForm shows each frame in a QLabel and, if recording, puts the frame into the queue so that VideoWriter can take it from the queue and write it into the video.

I measured the frame rates for showing frames in the QLabel and for recording; both fluctuate greatly (from 60 or lower up to 500 or higher). I am sure the frame size (800x800) is small enough for the target fps. I have also found that even with AcquisitionFrameRateEnable = True, AcquisitionFrameRate = 120 and GrabStrategy_LatestImages, the fps coming from the camera fluctuates greatly; maybe that's the reason, but I don't know how to fix it.


# VideoWriter will be placed in a qthread, and when the record button is clicked
# `self.isRecording` will be set to `True` and the `record` function will be called with a signal
class VideoWriter(QObject):
    def __init__(self):
        super().__init__()
        self.frame_queue = Queue()
        self.isRecording = False

    def record(self, height, width):
        video_path = f"./videos/video_{time.time()}.avi"
        fps = min(360000000 // (height * width * 3), 120)  # the theoretical value of fps is over 180
        with FFMPEG_VideoWriter(video_path, (height, width), fps=fps, pixfmt="rgb24") as writer:
            cur_time = time.time()
            while self.isRecording or not self.frame_queue.empty():
                if not self.frame_queue.empty():
                    writer.write_frame(self.frame_queue.get())
                    cur_time = time.time()
        self.frame_queue = Queue()

...

MyVideo will be placed in a QThread, and this endless loop (grab frames and send them to the main thread) is started as soon as the thread starts:

class MyVideo(QObject):
    frame_signal = pyqtSignal(ndarray)
    ...
    def play(self):
        while self.isLiving:
            if self.cap.IsGrabbing():
                res = self.cap.RetrieveResult(1000)
                if res:  # prevent null pointer exceptions
                    if res.GrabSucceeded():
                        frame = self.converter.Convert(res)
                        self.frame_signal.emit(frame.Array)  # transmit the frame to the main thread
                    res.Release()
    ...

class MyMainForm(QWidget):
    start_record_signal = pyqtSignal(int, int)

    def __init__(self):
        super().__init__()
        self.ui = uic.loadUi("MainWindow.ui")

        # video thread to receive frames from camera
        self.video = MyVideo()
        self.video_width = self.ui.label.width()
        self.video_height = self.ui.label.height()
        self.video_thread = QThread()
        self.video.moveToThread(self.video_thread)

        # video writer thread to record video
        self.video_writer = ffmpeg_video_writer.VideoWriter()
        self.video_writer_thread = QThread()
        self.video_writer.moveToThread(self.video_writer_thread)

        self.time = 0
        self.bind()
        self.video_thread.start()
        self.video_writer_thread.start()

    def bind(self):
        # video
        self.video.frame_signal.connect(self.receive_image)
        self.video_thread.started.connect(self.video.play)
        # video_writer
        self.start_record_signal.connect(self.video_writer.record)

    def receive_image(self, frame):
        # show the frame received in the qlabel
        if self.video_writer.isRecording:  # if recording, put the frame in the frame queue
            self.video_writer.frame_queue.put(frame)
        scale_level = self.scale_level_list[self.scale_level_index]
        # frame = cv2.resize(frame, size)
        height, width, channel = frame.shape
        qframe = QImage(
            frame.data,
            width,
            height,
            channel * width,
            QImage.Format_RGB888,
        )
        self.ui.label.setFixedSize(width, height)
        self.ui.scrollAreaWidgetContents.setFixedSize(width, height)
        self.ui.label.setPixmap(QPixmap.fromImage(qframe))

    def record_btn_clicked(self):
        # called when the record button is clicked ("录制" = "Record", "停止" = "Stop")
        if self.ui.pushButton_record.text() == "录制":
            self.video_writer.isRecording = True
            self.start_record_signal.emit(self.ui.label.height(), self.ui.label.width())  # start recording
            self.ui.pushButton_play.setEnabled(False)
            self.ui.pushButton_record.setText("停止")
        else:
            self.video_writer.isRecording = False
            self.ui.pushButton_play.setEnabled(True)
            self.ui.pushButton_record.setText("录制")
thiesmoeller commented 1 year ago

One issue that you might run into is Python's GIL. It can block your threads for longer than expected.

One option is to run the ffmpeg writer in a separate process using the multiprocessing module. In that case you should only transfer the raw video frame data via a queue.

Example here: https://github.com/basler/pypylon/issues/513#issuecomment-1346405311

If you want to write every frame to file but show live video in your PyQt GUI, you should add a decoupler between the grab thread and the GUI thread (a dropping queue with length 1).
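A minimal sketch of that multiprocessing approach (not the code from the linked example; names are illustrative, and the FFMPEG_VideoWriter class from above is assumed to be importable):

import multiprocessing as mp

def writer_process(frame_queue, path, size, fps):
    # runs in its own process, so the GIL of the grab/GUI process does not matter
    with FFMPEG_VideoWriter(path, size, fps=fps, pixfmt="rgb24") as writer:
        while True:
            frame = frame_queue.get()
            if frame is None:          # sentinel: stop recording
                break
            writer.write_frame(frame)

# in the grabbing process:
# q = mp.Queue(maxsize=100)
# p = mp.Process(target=writer_process, args=(q, "out.avi", (height, width), 30))
# p.start()
# ... q.put(frame) for every grabbed frame, then q.put(None) and p.join() ...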

meetAndEgg commented 1 year ago

Thanks for the help! I already solved this problem with cv2.VideoWriter. The problem was caused by the queue handling, not by the ffmpeg writer. I found that it arises here in my code:

# the problem in VideoWriter.record()
while self.isRecording or not self.frame_queue.empty():
    if not self.frame_queue.empty():
        writer.write_frame(self.frame_queue.get())

# the problem in MyVideo.play()
self.video_writer.frame_queue.put(frame)  # I forgot to include this line in my question

I fixed it by changing that code to the following, and I can now save videos at a stable frame rate of 120 fps:

# VideoWriter
writer = cv2.VideoWriter(video_path, fourcc, 120, (width, height))
try:
    while (frame := self.frame_queue.get()) is not None:
        writer.write(frame)
finally:
    writer.release()

# MyVideo
self.cap.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
recording = False
while self.cap.IsGrabbing():
    res = self.cap.RetrieveResult(1000)
    if res:
        if res.GrabSucceeded():
            frame = self.converter.Convert(res).Array
            if self.recording:
                recording = True
                self.video_writer.frame_queue.put(frame, block=False)
            elif recording:
                self.video_writer.frame_queue.put(None, block=False)  # put None to stop writing
                recording = False
        res.Release()

As you said, I think the problem was that the consumer (VideoWriter) thread was spinning idly and taking resources away from the producer (MyVideo) thread through the global interpreter lock.