alathon / EmguCVWallViz


[Code] [Server] Camera multiplexing #17

Open · alathon opened this issue 8 years ago

alathon commented 8 years ago

Write a multiplexer that takes as its configuration an XML file containing screen points for each of the X cameras. From this file it should generate one transformation matrix per camera, which it can then apply to blob event locations to obtain their 'screen' locations.
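A minimal sketch of what such a configuration file could look like, assuming four corner correspondences per camera (the element and attribute names are hypothetical, not an existing schema). Four point pairs per camera are exactly enough to solve for a 3x3 perspective homography:

```xml
<cameras>
  <!-- One entry per camera; each point pairs a camera-space pixel
       with the corresponding physical screen-space coordinate. -->
  <camera id="0">
    <point cameraX="12"  cameraY="8"   screenX="0"    screenY="0"/>
    <point cameraX="628" cameraY="10"  screenX="1280" screenY="0"/>
    <point cameraX="630" cameraY="470" screenX="1280" screenY="1024"/>
    <point cameraX="14"  cameraY="472" screenX="0"    screenY="1024"/>
  </camera>
  <!-- ... one <camera> element per remaining camera ... -->
</cameras>
```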

The multiplexer should run as part of the server. Producing the configuration file it needs requires a camera calibration app, run on the client (i.e. the touch-screen WallViz machine), that maps points from each camera to the physical screen space. Such an application has been made in the past, as has the multiplexer; ask Mikkel/Søren/Anders. There is already a separate issue for the camera calibration application.

There are two parts to the multiplexer:

  1. An immutable class that is initialized with the transformation matrices and exposes a single method translating local camera coordinates into 'global' screen coordinates.
  2. The multiplexer itself. Points that appear in only a single camera are trivial to map, but for points that appear in two or more cameras, the multiplexer must potentially merge them into a single point if they are close enough to be considered the same point. See e.g. how the current multiplexer in Java works. A sketch of both parts follows below.
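A minimal C# sketch of both parts, assuming 3x3 homography matrices and a simple greedy distance-threshold merge. The class names, the `mergeRadius` default, and the averaging strategy are illustrative choices, not taken from the existing Java multiplexer:

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

// Part 1: immutable per-camera transform. Wraps a 3x3 homography
// (row-major) built from the calibration points, and maps camera-space
// points to screen space with a perspective divide.
public sealed class CameraTransform
{
    private readonly double[,] h; // 3x3 homography matrix

    public CameraTransform(double[,] homography)
    {
        if (homography.GetLength(0) != 3 || homography.GetLength(1) != 3)
            throw new ArgumentException("Expected a 3x3 matrix.");
        h = (double[,])homography.Clone(); // defensive copy keeps the class immutable
    }

    // Translate a local camera coordinate into 'global' screen coordinates.
    public PointF ToScreen(PointF p)
    {
        double x = h[0, 0] * p.X + h[0, 1] * p.Y + h[0, 2];
        double y = h[1, 0] * p.X + h[1, 1] * p.Y + h[1, 2];
        double w = h[2, 0] * p.X + h[2, 1] * p.Y + h[2, 2];
        return new PointF((float)(x / w), (float)(y / w));
    }
}

// Part 2: the multiplexer. Maps each camera's blobs to screen space,
// then greedily merges points from different cameras that fall within
// a distance threshold, treating them as the same physical touch.
public sealed class Multiplexer
{
    private readonly IReadOnlyDictionary<int, CameraTransform> transforms;
    private readonly float mergeRadius;

    public Multiplexer(IReadOnlyDictionary<int, CameraTransform> transforms,
                       float mergeRadius = 20f)
    {
        this.transforms = transforms;
        this.mergeRadius = mergeRadius;
    }

    public List<PointF> Merge(IEnumerable<(int CameraId, PointF Blob)> blobs)
    {
        var merged = new List<PointF>();
        foreach (var (cameraId, blob) in blobs)
        {
            PointF s = transforms[cameraId].ToScreen(blob);
            int near = merged.FindIndex(m => Distance(m, s) < mergeRadius);
            if (near >= 0)
            {
                // Average with the existing point instead of adding a duplicate.
                var m = merged[near];
                merged[near] = new PointF((m.X + s.X) / 2f, (m.Y + s.Y) / 2f);
            }
            else
            {
                merged.Add(s);
            }
        }
        return merged;
    }

    private static float Distance(PointF a, PointF b)
    {
        float dx = a.X - b.X, dy = a.Y - b.Y;
        return (float)Math.Sqrt(dx * dx + dy * dy);
    }
}
```

Making the transform class immutable (defensive copy, no setters) means instances can be shared freely between per-camera detection threads without locking.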
alathon commented 8 years ago

I discussed merging the 3 camera images into one large image with Anders and Søren, which would avoid needing the multiplexer altogether. However, this has the serious downside of preventing parallelization of blob detection and tracking. That downside is a show-stopper IMO, so I would advise continuing with the multiplexer as planned, so that blob detection/tracking can run on each camera image in parallel (sketched below).
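A minimal sketch of the per-camera pipeline this preserves, using PLINQ; `frames`, `DetectBlobs`, and `multiplexer` are hypothetical stand-ins for the real server-side objects:

```csharp
// Run blob detection on each camera frame in parallel, then funnel the
// results through the multiplexer on a single thread.
var blobs = frames                      // e.g. IEnumerable<(int CameraId, Frame Frame)>
    .AsParallel()                       // one detection task per camera image
    .SelectMany(f => DetectBlobs(f.Frame)
        .Select(p => (f.CameraId, Blob: p)))
    .ToList();

var screenPoints = multiplexer.Merge(blobs); // merging itself stays sequential
```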