basler / pypylon

The official python wrapper for the pylon Camera Software Suite
http://www.baslerweb.com
BSD 3-Clause "New" or "Revised" License

Acquiring frames for two cameras #671

Open canliu0414 opened 10 months ago

canliu0414 commented 10 months ago

Hi,

I am trying to use two Basler cameras (acA1300-200uc) to simultaneously acquire videos, using a Python script running in PyCharm. Below is my current code:

from pypylon import pylon
import cv2
import time

# Initialize the transport layer factory
tlFactory = pylon.TlFactory.GetInstance()

# Get all attached devices and exit the application if no device is found
devices = tlFactory.EnumerateDevices()
if len(devices) < 2:
    raise pylon.RuntimeException("At least two cameras are required.")

# Initialize the first camera
camera1 = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateDevice(devices[0]))
camera1.Open()

# Initialize the second camera
camera2 = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateDevice(devices[1]))
camera2.Open()

# Configure camera settings (you can customize these settings)
camera1.ExposureTime.SetValue(10000)  # Set exposure time in microseconds
camera1.AcquisitionFrameRateEnable.SetValue(True)
camera1.AcquisitionFrameRate.SetValue(30)  # Set frame rate

camera2.ExposureTime.SetValue(10000)  # Set exposure time in microseconds
camera2.AcquisitionFrameRateEnable.SetValue(True)
camera2.AcquisitionFrameRate.SetValue(30)  # Set frame rate

# Set TTL output configuration for both cameras
camera1.LineSelector.SetValue("Line2")
camera1.LineMode.SetValue("Output")
camera1.LineSource.SetValue("ExposureActive")

camera2.LineSelector.SetValue("Line2")
camera2.LineMode.SetValue("Output")
camera2.LineSource.SetValue("ExposureActive")

# Initialize the video writers for both cameras
fourcc = cv2.VideoWriter_fourcc(*'XVID')

out1 = cv2.VideoWriter('C:\\Users\\Cornell\\Desktop\\output1.avi', fourcc, 30.0,
                      (camera1.Width.GetValue(), camera1.Height.GetValue()), isColor=False)

out2 = cv2.VideoWriter('C:\\Users\\Cornell\\Desktop\\output2.avi', fourcc, 30.0,
                      (camera2.Width.GetValue(), camera2.Height.GetValue()), isColor=False)

recording = True  # Start recording when the script is run

# Start grabbing for both cameras
camera1.StartGrabbing(pylon.GrabStrategy_OneByOne)
camera2.StartGrabbing(pylon.GrabStrategy_OneByOne)

# Create smaller OpenCV windows for both camera streams
cv2.namedWindow('Camera 1 Stream', cv2.WINDOW_NORMAL)
cv2.resizeWindow('Camera 1 Stream', 640, 480)  # Adjust the size as needed

cv2.namedWindow('Camera 2 Stream', cv2.WINDOW_NORMAL)
cv2.resizeWindow('Camera 2 Stream', 640, 480)  # Adjust the size as needed

print('Press "e" to stop acquisition')

start_time = time.time()
num_frames1 = 0
num_frames2 = 0

while recording:
    key = cv2.waitKey(1) & 0xFF

    # Quit video when "e" is pressed
    if key == ord('e'):
        recording = False
        break

    # Retrieve results from both cameras
    grab1 = camera1.RetrieveResult(100, pylon.TimeoutHandling_ThrowException)
    grab2 = camera2.RetrieveResult(100, pylon.TimeoutHandling_ThrowException)

    if grab1.GrabSucceeded() and grab2.GrabSucceeded():
        frame1 = grab1.Array
        frame2 = grab2.Array

        cv2.imshow('Camera 1 Stream', frame1)
        cv2.imshow('Camera 2 Stream', frame2)

        out1.write(frame1)
        out2.write(frame2)

        num_frames1 += 1
        num_frames2 += 1

# Calculate the elapsed time
end_time = time.time()
elapsed_time = end_time - start_time

# Clean up for both cameras
camera1.Close()
out1.release()
camera2.Close()
out2.release()
cv2.destroyAllWindows()

print(f"Recording finished. Elapsed time: {elapsed_time:.2f} seconds")
print(f"Number of frames acquired (Camera 1): {num_frames1}")
print(f"Number of frames acquired (Camera 2): {num_frames2}")
print("Videos saved as 'output1.avi' and 'output2.avi'.")

The current problem is that the saved videos are not at 30 Hz each, but at about 24 Hz. The display in the OpenCV windows during recording looks fine, so I suspect the problem is with the writing of the video files.

Could someone provide some suggestions on this?

Eventually, I want to use hardware triggering with an Arduino. In this script, one camera starts after the other; ideally, I would use the Python script to tell the Arduino to generate TTL pulses that trigger every frame. But I want to work this out first, to make sure I can save two videos at the desired frame rate.

Thank you!

thiesmoeller commented 10 months ago

You have to use the InstantCameraArray class to wait for frames in parallel. Your code is waiting on both cameras sequentially.

See the companion repository https://github.com/basler/pypylon-samples with samples for multicamera and HW trigger
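The pattern above can be sketched as follows. The grab calls themselves need attached hardware, so they appear as comments; the demultiplexing step (routing each result by its camera context) is plain Python. The helper name `route_frame` is illustrative, not part of the pypylon API.

```python
from collections import Counter

def route_frame(counters, camera_id):
    """Tally a grabbed frame against the camera it came from.
    camera_id is what grabResult.GetCameraContext() returns: the index
    of the camera within the InstantCameraArray."""
    counters[camera_id] += 1
    return counters

# With hardware attached, the single wait point would look like:
#
#   cameras = pylon.InstantCameraArray(2)
#   for i, cam in enumerate(cameras):
#       cam.Attach(tlFactory.CreateDevice(devices[i]))
#   cameras.StartGrabbing(pylon.GrabStrategy_OneByOne)
#   counters = Counter()
#   while cameras.IsGrabbing():
#       # One call waits on whichever camera delivers the next frame.
#       res = cameras.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
#       if res.GrabSucceeded():
#           route_frame(counters, res.GetCameraContext())
#       res.Release()

counters = route_frame(route_frame(Counter(), 0), 1)
print(counters[0], counters[1])  # one frame tallied per camera
```

The key difference from the original loop is the single `RetrieveResult()` on the array: it returns whichever camera's frame is ready first instead of blocking on camera 1 while camera 2's buffers pile up.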

canliu0414 commented 10 months ago

Thanks for your quick response!

I changed to InstantCameraArray, and with some modifications it seems to be working; both recorded videos reached the desired frame rate.

from pypylon import pylon
import numpy as np
import cv2
import time

# Create smaller OpenCV windows for both camera streams
cv2.namedWindow('Camera 1 Stream', cv2.WINDOW_NORMAL)
cv2.resizeWindow('Camera 1 Stream', 640, 512)  # Adjust the size as needed

cv2.namedWindow('Camera 2 Stream', cv2.WINDOW_NORMAL)
cv2.resizeWindow('Camera 2 Stream', 640, 512)  # Adjust the size as needed

num_frames1 = 0
num_frames2 = 0

tlFactory = pylon.TlFactory.GetInstance()
devices = tlFactory.EnumerateDevices()
if len(devices) < 2:
    raise pylon.RuntimeException("At least two cameras are required.")

cameras = pylon.InstantCameraArray(2)

for i, camera in enumerate(cameras):
    camera.Attach(tlFactory.CreateDevice(devices[i]))
    camera.Open()

    camera.AcquisitionFrameRateEnable.SetValue(True)
    camera.AcquisitionFrameRate.SetValue(30)  # Set frame rate to 30 Hz
    camera.ExposureTime.SetValue(10000)

    camera.LineSelector.SetValue("Line2")
    camera.LineMode.SetValue("Output")
    camera.LineSource.SetValue("ExposureActive")

# Create VideoWriters for both cameras
fourcc = cv2.VideoWriter_fourcc(*'XVID')

out1 = cv2.VideoWriter('D:\\videoCaptures\\output1.avi', fourcc, 30.0, (1280,1024), isColor=False)
out2 = cv2.VideoWriter('D:\\videoCaptures\\output2.avi', fourcc, 30.0, (1280,1024), isColor=False)

# Starts grabbing for all cameras
cameras.StartGrabbing(pylon.GrabStrategy_LatestImageOnly,
                      pylon.GrabLoop_ProvidedByUser)

start_time = time.time()

while cameras.IsGrabbing():
    # grabResult1 = cameras[0].RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    # grabResult2 = cameras[1].RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    #
    # if grabResult1.GrabSucceeded() and grabResult2.GrabSucceeded():
    #     im1 = grabResult1.GetArray()
    #     im2 = grabResult2.GetArray()
    #
    #     # Save the frames to video files
    #     out1.write(im1)
    #     out2.write(im2)
    #
    #     num_frames1 += 1
    #     num_frames2 += 1
    #
    #     cv2.imshow('Acquisition', np.hstack([im1, im2]))

    grabResult = cameras.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if grabResult.GrabSucceeded():

        cameraID = grabResult.GetCameraContext()
        img = grabResult.GetArray()

        if cameraID == 0:

            num_frames1 += 1
            cv2.imshow('Camera 1 Stream', img)
            out1.write(img)

        if cameraID == 1:
            num_frames2 += 1
            cv2.imshow('Camera 2 Stream', img)
            out2.write(img)

    # If ESC is pressed, exit and destroy the windows
    if cv2.waitKey(1) & 0xFF == 27:
        break

# Calculate the elapsed time
end_time = time.time()
elapsed_time = end_time - start_time
frame_rate = num_frames1/elapsed_time

out1.release()
out2.release()
cv2.destroyAllWindows()

print(f"Recording finished. Elapsed time: {elapsed_time:.2f} seconds")
print(f"Number of frames acquired (Camera 1): {num_frames1}")
print(f"Number of frames acquired (Camera 2): {num_frames2}")
print(f"Frame rate (camera1) is: {frame_rate}")

And I will move on and try the hardware trigger.

canliu0414 commented 10 months ago

Just a follow-up on this thread. I am trying to record with my laptop due to some constraints. It turns out the laptop only reaches a 30 Hz recording frame rate when plugged into power, and cannot reach 30 Hz when running on its own battery. I can potentially still work with this, but what is the reason behind it?

thiesmoeller commented 10 months ago

When on battery, a notebook applies numerous power-saving measures: e.g. allowing the CPU to go into deeper sleep states, which massively increases interrupt wakeup time, or reducing the RAM speed.

Our recommendation for reaching stable high performance is to disable energy saving completely.

canliu0414 commented 10 months ago

Thanks again for the response.

I went on to try hardware triggering. The basic goal is to have a Python script that, when the 's' key is hit, tells an Arduino to start generating TTL pulses at a certain frequency (40 Hz); the Arduino's TTL output is connected to the camera's input line and triggers each frame, at a 40 Hz rate. When the 'e' key is hit, the Arduino stops generating TTLs and acquisition stops. I have prepared one script in Python and one for the Arduino. However, I can now see that the trigger from the Arduino runs at 40 Hz, but the frame rate from the camera is around 20 Hz. Could you provide some suggestions on what might be wrong? Thank you!

here is the code on arduino:

const int outputPin = 2;  // Output pin to generate TTL pulses
unsigned long pulseLength = 15000;  // Pulse length in microseconds (15 milliseconds)
unsigned long pulseInterval = 25000;  // Pulse start interval in microseconds (25 milliseconds)
bool generatePulses = false;  // Flag to control pulse generation
unsigned long previousMicros = 0;

void setup() {
  pinMode(outputPin, OUTPUT);
  Serial.begin(9600);  // Initialize serial communication
}

void loop() {
  while (Serial.available() > 0) {
    char command = Serial.read();
    if (command == 's') {  // Start pulse generation
      generatePulses = true;
      Serial.println("Pulse generation started.");
    } else if (command == 'e') {  // Stop pulse generation
      generatePulses = false;
      digitalWrite(outputPin, LOW);  // Ensure output is low when stopped
      Serial.println("Pulse generation stopped.");
    }
  }

  if (generatePulses) {
    unsigned long currentMicros = micros();
    if (currentMicros - previousMicros >= pulseInterval) {
      previousMicros = currentMicros;
      digitalWrite(outputPin, HIGH);  // Turn on TTL pulse
      delayMicroseconds(pulseLength);  // Pulse length
      digitalWrite(outputPin, LOW);   // Turn off TTL pulse
    }
  }
}
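As a quick sanity check on the sketch's timing constants (plain arithmetic on the values above, not a measurement): the pulse repeats every pulseInterval microseconds, so the Arduino side should indeed produce 40 Hz. Note also that delayMicroseconds(pulseLength) blocks the loop for the full 15 ms pulse, which is fine at this rate but limits how much faster this sketch could go.

```python
def trigger_frequency_hz(pulse_interval_us):
    """Trigger rate implied by the pulse repetition interval."""
    return 1_000_000 / pulse_interval_us

def duty_cycle(pulse_length_us, pulse_interval_us):
    """Fraction of each period the TTL line is held high."""
    return pulse_length_us / pulse_interval_us

print(trigger_frequency_hz(25_000))  # 40.0 Hz, matching the intended rate
print(duty_cycle(15_000, 25_000))    # 0.6 -> line high 60% of each period
```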

and here is the code in python:

from pypylon import pylon
import cv2
import time
import serial

# Initialize the transport layer factory
tlFactory = pylon.TlFactory.GetInstance()

# Get all attached devices and exit the application if no device is found
devices = tlFactory.EnumerateDevices()
if len(devices) < 1:  # Change this to 1 since you're testing with one camera
    raise pylon.RuntimeException("At least one camera is required.")  # Update the message

# Initialize the first camera
camera1 = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateDevice(devices[0]))
camera1.Open()

# Set TTL output configuration for the camera
camera1.LineSelector.SetValue("Line2")
camera1.LineMode.SetValue("Output")
camera1.LineSource.SetValue("ExposureActive")
camera1.LineInverter.SetValue(True)

# Set TTL trigger input configuration for the camera
camera1.LineSelector.SetValue("Line4")
camera1.LineMode.SetValue("Input")
camera1.TriggerSelector.SetValue("FrameStart")
camera1.TriggerSource.SetValue("Line4")
camera1.TriggerMode.SetValue("On")
camera1.TriggerActivation.SetValue("FallingEdge")

# Set exposure mode and time for the camera
# camera1.ExposureMode.SetValue("Timed")
# camera1.ExposureTime.SetValue(15000)  # Set exposure time in microseconds (15 milliseconds)
camera1.ExposureMode.SetValue("TriggerWidth")

# Create an OpenCV window
cv2.namedWindow('Camera 1 Stream', cv2.WINDOW_NORMAL)

# Initialize the video writer for the camera
fourcc = cv2.VideoWriter_fourcc(*'XVID')

out1 = cv2.VideoWriter('D:\\videos\\output1.avi', fourcc, 40.0,
                      (camera1.Width.GetValue(), camera1.Height.GetValue()), isColor=False)

recording = False  # Start recording when "s" is pressed

# Initialize serial communication with Arduino
ser = serial.Serial('COM3', 9600)  # Replace 'COM3' with the correct port on your system

print('Press "s" to start acquisition, "e" to stop acquisition')

num_frames1 = 0

while True:
    key = cv2.waitKey(1) & 0xFF

    if key == ord('s'):
        if not recording:
            # Start TTL pulses and camera acquisition
            # ser.write('s'.encode())
            # time.sleep(1)  # Wait for Arduino to respond and start TTL
            camera1.StartGrabbing(pylon.GrabStrategy_OneByOne)
            ser.write('s'.encode())
            time.sleep(1)
            recording = True
            start_time = time.time()
            print("Acquisition started")

    elif key == ord('e'):
        if recording:
            # Stop TTL pulses and camera acquisition
            # ser.write('e'.encode())
            camera1.StopGrabbing()
            ser.write('e'.encode())
            recording = False
            end_time = time.time()
            elapsed_time = end_time - start_time
            print("Acquisition stopped")

            break

    if recording:
        # Retrieve results from the camera
        grab1 = camera1.RetrieveResult(100, pylon.TimeoutHandling_ThrowException)

        if grab1.GrabSucceeded():
            frame1 = grab1.Array

            cv2.imshow('Camera 1 Stream', frame1)

            out1.write(frame1)

            num_frames1 += 1

# Clean up for the camera
camera1.Close()
out1.release()
cv2.destroyAllWindows()
frameRate = num_frames1/elapsed_time

print(f"Recording finished. Elapsed time: {elapsed_time:.2f} seconds")
print(f"Number of frames acquired (Camera 1): {num_frames1}")
print(f"Frame rate is: {frameRate}")
print("Video saved as 'output1.avi'.")

HighImp commented 10 months ago

Hi, please measure your ExposureActive signal on Line2 and check the frequency. The ExposureActive line should be nearly the same as your input signal because of the TriggerWidth setting you are using; otherwise, you are overtriggering the camera. If both signals are identical, your host may not be able to handle the incoming images in time.
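One way overtriggering shows up as exactly half the expected rate: with ExposureMode=TriggerWidth the exposure lasts as long as the 15 ms pulse, and the camera can only accept the next FrameStart trigger once exposure and readout have finished. If that total exceeds the 25 ms trigger period, every other pulse is dropped and 40 Hz in becomes 20 Hz out. A rough feasibility check (the 10 ms readout figure is purely hypothetical, for illustration; check your camera's actual readout time):

```python
def max_trigger_rate_hz(exposure_us, readout_us):
    """Highest trigger rate a camera can follow if each frame's exposure
    and readout must complete before the next trigger is accepted
    (non-overlapping case; some cameras can overlap the two)."""
    return 1_000_000 / (exposure_us + readout_us)

# 15 ms TriggerWidth exposure plus a hypothetical 10 ms readout:
print(max_trigger_rate_hz(15_000, 10_000))  # 40.0 -> right at the limit
# Any slower, and every other 40 Hz pulse is skipped, landing near 20 Hz.
```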

Just to try it: can you please delete the lines

cv2.imshow('Camera 1 Stream', frame1)
out1.write(frame1)

and check the fps again?
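A minimal sketch of that test, with the display and disk writes removed so only the grab loop remains (the grab calls need hardware and are shown as comments; the rate computation itself is plain Python):

```python
def measured_fps(num_frames, elapsed_s):
    """Average frame rate over a measurement window."""
    return num_frames / elapsed_s

# Bare grab loop, assuming camera1 is the opened, triggered camera from above:
#
#   camera1.StartGrabbing(pylon.GrabStrategy_OneByOne)
#   t0, n = time.time(), 0
#   while time.time() - t0 < 10.0:  # measure for 10 seconds
#       res = camera1.RetrieveResult(1000, pylon.TimeoutHandling_ThrowException)
#       if res.GrabSucceeded():
#           n += 1
#       res.Release()
#   camera1.StopGrabbing()
#   print(measured_fps(n, time.time() - t0))

print(measured_fps(400, 10.0))  # 40.0 if no triggers are lost
```

If the bare loop reaches 40 fps but the full script does not, the bottleneck is the per-frame imshow/write work on the host, not the camera or the trigger.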

SMA2016a commented 8 months ago

I'm reaching out regarding the status of a particular case on our GitHub repository.

Upon reviewing the case, it seems that the issue or question addressed may no longer be relevant or current. To maintain the clarity and organization of our repository, I kindly request your assistance in updating the status of this case to "Closed" if it has been resolved or if it is no longer applicable.