Closed xlukas10 closed 3 years ago
Yes you definitely should be able to record images from multiple cameras. Are you able to grab frames from each camera individually or is there already a problem with one of them when used on its own? Can you display a live stream from both cameras at the same time using the Vimba Viewer? Maybe using both cameras simultaneously leads to problems because the interface you are using cannot transfer all the data required for this.
Could you maybe provide the modified version you are trying to use for me to take a look at it?
The structure of your code would probably depend on what you want to do, but my first idea would be to create as many frame_queues as you have cameras and let each camera write to its own queue. Then you could have a single consumer that takes all these queues and pulls images from each of them to work with. But please be aware that this is a very naive approach that, for example, gives no consideration to what happens if the cameras produce frames at different speeds. In the simplest case the consumer might take a frame from frame_queue A and a frame from frame_queue B, but these two frames might not have been taken at the same time. If you do need synchronization between the frames, triggering would probably be the most reliable way to achieve it. But I guess the first step would be to get two cameras going at the same time...
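The per-camera queue idea can be sketched without any camera hardware. Everything below (the producer/consumer helper names, the dummy string "frames") is made up for illustration; in a real application each producer would be a Vimba frame callback writing into its camera's queue:

```python
import queue
import threading
import time

NUM_CAMERAS = 2

def producer(cam_id, frame_queue, n_frames):
    # Stand-in for a camera callback: each "camera" writes frames
    # (here just strings) into its own dedicated queue.
    for i in range(n_frames):
        frame_queue.put((cam_id, 'frame-{}'.format(i)))
        time.sleep(0.001)
    frame_queue.put((cam_id, None))  # sentinel: this stream has ended

def consumer(frame_queues, results):
    # Single consumer draining all queues round-robin. Deliberately naive:
    # nothing guarantees the frames taken from different queues in one pass
    # were exposed at the same time.
    open_streams = set(range(len(frame_queues)))
    while open_streams:
        for idx in list(open_streams):
            try:
                cam_id, frame = frame_queues[idx].get_nowait()
            except queue.Empty:
                continue
            if frame is None:
                open_streams.discard(idx)
            else:
                results.append((cam_id, frame))

frame_queues = [queue.Queue(maxsize=10) for _ in range(NUM_CAMERAS)]
results = []
producers = [threading.Thread(target=producer, args=(i, q, 5))
             for i, q in enumerate(frame_queues)]
c = threading.Thread(target=consumer, args=(frame_queues, results))
for p in producers:
    p.start()
c.start()
for p in producers:
    p.join()
c.join()
print(len(results))  # 10 frames total, 5 per camera
```

The sentinel `None` mirrors how the example below signals a stopped stream; hardware-triggered acquisition would be needed if the per-camera frames must actually be simultaneous.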
Hello, yes, each camera works as intended when used individually. I did not try viewing both streams in Vimba Viewer and I will not be able to try it until Monday or Tuesday, so I will let you know then.
The modified code is here. Basically I tried creating two frame queues and two instances of the frame consumer. I also changed the window title, so two windows will open. If I understood correctly, the original version should show the streams side by side in one window, but as I said, neither the original version nor the modified one worked for me.
Also, if it is of any help, both cameras used are Manta 125b.
import copy
import cv2
import threading
import queue
import numpy
from typing import Optional
from vimba import *

FRAME_QUEUE_SIZE = 10
FRAME_HEIGHT = 480
FRAME_WIDTH = 480


def print_preamble():
    print('////////////////////////////////////////////')
    print('/// Vimba API Multithreading Example ///////')
    print('////////////////////////////////////////////\n')
    print(flush=True)


def add_camera_id(frame: Frame, cam_id: str) -> Frame:
    # Helper function inserting 'cam_id' into given frame. This function
    # manipulates the original image buffer inside frame object.
    cv2.putText(frame.as_opencv_image(), 'Cam: {}'.format(cam_id), org=(0, 30), fontScale=1,
                color=255, thickness=1, fontFace=cv2.FONT_HERSHEY_COMPLEX_SMALL)
    return frame


def resize_if_required(frame: Frame) -> numpy.ndarray:
    # Helper function resizing the given frame, if it has not the required dimensions.
    # On resizing, the image data is copied and resized, the image inside the frame object
    # is untouched.
    cv_frame = frame.as_opencv_image()

    if (frame.get_height() != FRAME_HEIGHT) or (frame.get_width() != FRAME_WIDTH):
        cv_frame = cv2.resize(cv_frame, (FRAME_WIDTH, FRAME_HEIGHT), interpolation=cv2.INTER_AREA)
        cv_frame = cv_frame[..., numpy.newaxis]

    return cv_frame


def create_dummy_frame() -> numpy.ndarray:
    cv_frame = numpy.zeros((50, 640, 1), numpy.uint8)
    cv_frame[:] = 0

    cv2.putText(cv_frame, 'No Stream available. Please connect a Camera.', org=(30, 30),
                fontScale=1, color=255, thickness=1, fontFace=cv2.FONT_HERSHEY_COMPLEX_SMALL)

    return cv_frame


def try_put_frame(q: queue.Queue, cam: Camera, frame: Optional[Frame]):
    try:
        q.put_nowait((cam.get_id(), frame))
    except queue.Full:
        pass


def set_nearest_value(cam: Camera, feat_name: str, feat_value: int):
    # Helper function that tries to set a given value. If setting of the initial value failed
    # it calculates the nearest valid value and sets the result. This function is intended to
    # be used with Height and Width Features because not all Cameras allow the same values
    # for height and width.
    feat = cam.get_feature_by_name(feat_name)

    try:
        feat.set(feat_value)
    except VimbaFeatureError:
        min_, max_ = feat.get_range()
        inc = feat.get_increment()

        if feat_value <= min_:
            val = min_
        elif feat_value >= max_:
            val = max_
        else:
            val = (((feat_value - min_) // inc) * inc) + min_

        feat.set(val)

        msg = ('Camera {}: Failed to set value of Feature \'{}\' to \'{}\': '
               'Using nearest valid value \'{}\'. Note that, this causes resizing '
               'during processing, reducing the frame rate.')
        Log.get_instance().info(msg.format(cam.get_id(), feat_name, feat_value, val))


# Thread Objects
class FrameProducer(threading.Thread):
    def __init__(self, cam: Camera, frame_queue: queue.Queue):
        threading.Thread.__init__(self)
        self.log = Log.get_instance()
        self.cam = cam
        self.frame_queue = frame_queue
        self.killswitch = threading.Event()

    def __call__(self, cam: Camera, frame: Frame):
        # This method is executed within VimbaC context. All incoming frames
        # are reused for later frame acquisition. If a frame shall be queued, the
        # frame must be copied and the copy must be sent, otherwise the acquired
        # frame will be overridden as soon as the frame is reused.
        if frame.get_status() == FrameStatus.Complete:
            if not self.frame_queue.full():
                frame_cpy = copy.deepcopy(frame)
                try_put_frame(self.frame_queue, cam, frame_cpy)

        cam.queue_frame(frame)

    def stop(self):
        self.killswitch.set()

    def setup_camera(self):
        set_nearest_value(self.cam, 'Height', FRAME_HEIGHT)
        set_nearest_value(self.cam, 'Width', FRAME_WIDTH)

        # Try to enable automatic exposure time setting
        try:
            self.cam.ExposureAuto.set('Once')
        except (AttributeError, VimbaFeatureError):
            self.log.info('Camera {}: Failed to set Feature \'ExposureAuto\'.'.format(
                self.cam.get_id()))

        self.cam.set_pixel_format(PixelFormat.Mono8)

    def run(self):
        self.log.info('Thread \'FrameProducer({})\' started.'.format(self.cam.get_id()))

        try:
            with self.cam:
                self.setup_camera()
                try:
                    self.cam.start_streaming(self)
                    self.killswitch.wait()
                finally:
                    self.cam.stop_streaming()
        except VimbaCameraError:
            pass
        finally:
            try_put_frame(self.frame_queue, self.cam, None)

        self.log.info('Thread \'FrameProducer({})\' terminated.'.format(self.cam.get_id()))


class FrameConsumer(threading.Thread):
    def __init__(self, frame_queue: queue.Queue):
        threading.Thread.__init__(self)
        self.log = Log.get_instance()
        self.frame_queue = frame_queue

    def run(self):
        IMAGE_CAPTION = 'Multithreading Example: Press <Enter> to exit'
        KEY_CODE_ENTER = 13

        frames = {}
        alive = True

        self.log.info('Thread \'FrameConsumer\' started.')

        while alive:
            # Update current state by dequeuing all currently available frames.
            frames_left = self.frame_queue.qsize()
            while frames_left:
                try:
                    cam_id, frame = self.frame_queue.get_nowait()
                except queue.Empty:
                    break

                # Add/Remove frame from current state.
                if frame:
                    frames[cam_id] = frame
                else:
                    frames.pop(cam_id, None)

                frames_left -= 1

            # Construct image by stitching frames together.
            if frames:
                cv_images = [resize_if_required(frames[cam_id]) for cam_id in sorted(frames.keys())]
                cv2.imshow(cam_id, numpy.concatenate(cv_images, axis=1))

            # If there are no frames available, show dummy image instead
            else:
                cv2.imshow(IMAGE_CAPTION, create_dummy_frame())

            # Check for shutdown condition
            if KEY_CODE_ENTER == cv2.waitKey(10):
                cv2.destroyAllWindows()
                alive = False

        self.log.info('Thread \'FrameConsumer\' terminated.')


class MainThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.frame_queue1 = queue.Queue(maxsize=FRAME_QUEUE_SIZE)
        self.frame_queue2 = queue.Queue(maxsize=FRAME_QUEUE_SIZE)
        self.producers = {}
        self.producers_lock = threading.Lock()

    def __call__(self, cam: Camera, event: CameraEvent):
        # New camera was detected. Create FrameProducer, add it to active FrameProducers
        if event == CameraEvent.Detected:
            with self.producers_lock:
                self.producers[cam.get_id()] = FrameProducer(cam, self.frame_queue)
                self.producers[cam.get_id()].start()

        # An existing camera was disconnected, stop associated FrameProducer.
        elif event == CameraEvent.Missing:
            with self.producers_lock:
                producer = self.producers.pop(cam.get_id())
                producer.stop()
                producer.join()

    def run(self):
        log = Log.get_instance()
        consumer1 = FrameConsumer(self.frame_queue1)
        consumer2 = FrameConsumer(self.frame_queue2)

        vimba = Vimba.get_instance()
        vimba.enable_log(LOG_CONFIG_INFO_CONSOLE_ONLY)

        log.info('Thread \'MainThread\' started.')

        with vimba:
            # Construct FrameProducer threads for all detected cameras
            i = 1
            for cam in vimba.get_all_cameras():
                if i == 1:
                    self.producers[cam.get_id()] = FrameProducer(cam, self.frame_queue1)
                    i += 1
                if i == 2:
                    self.producers[cam.get_id()] = FrameProducer(cam, self.frame_queue2)

            # Start FrameProducer threads
            with self.producers_lock:
                for producer in self.producers.values():
                    producer.start()

            # Start and wait for consumer to terminate
            vimba.register_camera_change_handler(self)
            consumer1.start()
            consumer2.start()
            consumer1.join()
            consumer2.join()
            vimba.unregister_camera_change_handler(self)

            # Stop all FrameProducer threads
            with self.producers_lock:
                # Initiate concurrent shutdown
                for producer in self.producers.values():
                    producer.stop()

                # Wait for shutdown to complete
                for producer in self.producers.values():
                    producer.join()

        log.info('Thread \'MainThread\' terminated.')


if __name__ == '__main__':
    print_preamble()
    main = MainThread()
    main.start()
    main.join()
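One detail worth double-checking in the camera loop above: because the first branch increments `i` before the second `if i == 2` is evaluated, the second branch can match the same first camera and overwrite its producer, leaving both cameras writing to `frame_queue2`. A counter-free way to pair each camera with exactly one queue can be sketched with `zip`; the `FakeCamera` class and `DEV_A`/`DEV_B` IDs below are hypothetical stand-ins, not real `vimba.Camera` handles:

```python
import queue

class FakeCamera:
    # Hypothetical stand-in for vimba.Camera, just enough for the sketch.
    def __init__(self, cam_id):
        self._id = cam_id

    def get_id(self):
        return self._id

cameras = [FakeCamera('DEV_A'), FakeCamera('DEV_B')]
frame_queues = [queue.Queue(maxsize=10) for _ in cameras]

# zip pairs each camera with exactly one queue: no counter is needed,
# and no branch can match twice for the same camera.
assignment = {cam.get_id(): q for cam, q in zip(cameras, frame_queues)}
print(sorted(assignment.keys()))  # ['DEV_A', 'DEV_B']
```

With such a mapping, a `FrameProducer` could be built per `(camera, queue)` pair in a single loop regardless of how many cameras are detected.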
Thank you for your suggestions. I tried the cameras with Vimba Viewer and they did work, but at roughly one frame every several seconds, so I figured that the connection might indeed be the problem. After I changed the resolution on both cameras to small values, it started to work perfectly. In the end it turned out that the USB-C cable I use to connect to a dock with an Ethernet port was causing the low network throughput.
Once again, thank you very much.
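The bandwidth explanation is easy to sanity-check with a back-of-the-envelope calculation. The resolution and frame rate below are assumptions (nominal full-frame figures for a Manta G-125, not values taken from this thread):

```python
# Rough bandwidth estimate for two uncompressed Mono8 GigE streams.
# width/height/fps are assumed nominal Manta G-125 figures, not measured.
width, height = 1292, 964
fps = 30
bytes_per_pixel = 1  # Mono8
cameras = 2

bits_per_second = width * height * bytes_per_pixel * 8 * fps * cameras
print(round(bits_per_second / 1e6))  # ~598 Mbit/s
```

Under these assumptions, two full-rate streams need roughly 600 Mbit/s, which fits on a dedicated gigabit link but not on a dock connection throttled to 100 Mbit/s, consistent with the one-frame-per-several-seconds behavior until the resolution was reduced.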
Hello, I would like to ask if it is possible to grab frames from multiple cameras connected via GigE. If so, what are the prerequisites for it to work?
I tried the multithreading_opencv.py example and always get frames from only one camera. I tried to modify the example slightly so that each device grabs into its own queue, but after starting both streams only one queue starts to fill while the other stays empty.
In Wireshark it looks like the PC communicates with both devices in much the same way, so it seems the frames are being sent but not handled properly by the script.
Am I doing something wrong, or can anyone point me in the right direction?
Thank you very much in advance.