arenaxr / arena-web-core

An environment to view and interact in multiuser virtual and augmented reality.
https://docs.arenaxr.org
BSD 3-Clause "New" or "Revised" License

Refactor: CV pipeline #612

Open hi-liang opened 4 months ago

hi-liang commented 4 months ago

Currently the armarker system initializes the device-dependent CV pipeline exclusively for the apriltag detector.

What should happen is that the CV pipeline belongs to its own system, which allows (potentially multiple) CV subscribers to register for frame data. This will allow each subscriber to pull frames as often as it needs.

Probably implement with an event being emitted every frame (unknown effect on timing, as these enter the global async queue), plus a getter function for the last pose-synced frame.
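
Just to make the idea concrete, a minimal sketch of what such a system could look like (the system name `cv-pipeline`, the `cv-frame` event, and `getLastPoseSyncedFrame` are all placeholder names, not existing arena-web-core APIs):

```js
// Hypothetical sketch only: system name, event name, and method names
// are placeholders, not existing arena-web-core code.
AFRAME.registerSystem('cv-pipeline', {
    init() {
        this.subscribers = []; // components/workers that want frames
        this.lastFrame = null; // most recent pose-synced frame
    },
    // Subscribers register here instead of the armarker system owning the
    // pipeline exclusively.
    subscribe(subscriber) {
        this.subscribers.push(subscriber);
    },
    // Called by the camera capture mechanism whenever a new frame is ready.
    onFrame(frame) {
        this.lastFrame = frame;
        // Option A: emit an event every frame (goes through the global
        // async queue, so the timing effect is unknown, as noted above).
        this.el.emit('cv-frame', { frame }, false);
    },
    // Option B: getter so each subscriber pulls the last pose-synced frame
    // only as often as it needs.
    getLastPoseSyncedFrame() {
        return this.lastFrame;
    },
});
```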

nampereira commented 4 months ago

Yup, sounds good! At the time, I thought about it, but there was no real need for it.

The pipeline was sorta created with it in mind, but I admit there might be some issues to think through. The biggest one, I think, is that the current pipeline relies on a single worker to signal "I'm done, please get another frame" and only has a single buffer that is passed between camera capture and the worker.

Otherwise, each camera capture mechanism (in camera-capture) has a setCVWorker that could be turned into an addCVWorker that appends to a list of workers. Then, when a new frame is available, the postMessage could send the pixels to several workers (I think the underlying buffer allows sharing). The issue is really how to trigger the next frame capture when we have multiple workers.
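
Roughly, that change could look like the sketch below, assuming a worker list plus a broadcast on each new frame (the class and method shapes here are illustrative, not the actual camera-capture code):

```js
// Illustrative only: the real camera-capture classes and message format
// may differ. setCVWorker becomes addCVWorker over a list of workers.
class CameraCapture {
    constructor() {
        this.cvWorkers = []; // was: a single worker set via setCVWorker
    }

    addCVWorker(worker) {
        this.cvWorkers.push(worker);
    }

    // Called when a new camera frame has been read into frameBuffer.
    onNewFrame(frameBuffer, width, height) {
        this.cvWorkers.forEach((worker) => {
            // Without a transfer list this structured-clones (copies) the
            // pixels for each worker; with a transfer list the buffer is
            // moved and can only go to ONE worker -- which is exactly the
            // "how to trigger the next capture" issue discussed below.
            worker.postMessage({ type: 'frame', buffer: frameBuffer, width, height });
        });
    }
}
```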

hi-liang commented 4 months ago

That's a good point, though I think the array is exclusive to either the main thread or a single worker at any given stage. So different workers can't simultaneously share the same resource: since it's a zero-copy transfer of the ArrayBuffer, they would block each other and the next camera frame.
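
For reference, this is the transfer behavior in question: once an ArrayBuffer is transferred to one worker it is detached on the sending side, so nothing else can touch it until it comes back (`workerA` below is just an illustrative Worker instance):

```js
// Transferring an ArrayBuffer detaches it on the sending side.
const buffer = new ArrayBuffer(640 * 480 * 4); // e.g. one RGBA frame
workerA.postMessage({ buffer }, [buffer]);     // zero-copy transfer to workerA
console.log(buffer.byteLength);                // 0 -- detached; neither main
                                               // nor workerB can use it until
                                               // workerA transfers it back
```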

So subscription could be more on a per-frame level (see the sketch after the two flows below).

With 1 worker:

  1. WorkerA: getFrame msg, adds to buffer dispatch queue
  2. Main - readPixels to Buffer
  3. Main - Dispatch Buffer to: workerA
  4. WorkerA: return Buffer (repeat)

With 2 workers:

  1. WorkerB: getFrame msg
  2. WorkerA: getFrame msg (might actually get in anytime from before Main's readPixels process until Buffer comes back from WorkerB)
  3. Main - readPixels to Buffer
  4. Main - Dispatch Buffer to: workerB
  5. WorkerB: return Buffer
  6. WorkerB: getFrame msg (need to determine that this should go to nextQueue, since WorkerB has already seen this frame)
  7. Main - Dispatch Buffer to: workerA (currentQueue is not empty because of waiting workerA, so keep dispatching current buffer)
  8. WorkerA: return Buffer
  9. Main - currentQueue is empty, nextQueue is not empty, get another frame and advance queues
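
A rough sketch of that currentQueue/nextQueue dispatch logic; the names here (FrameDispatcher, sawCurrentFrame, captureNextFrame) are hypothetical, not existing code:

```js
// Hypothetical sketch of the two-queue dispatch described above.
class FrameDispatcher {
    constructor() {
        this.currentQueue = []; // workers waiting for the current frame
        this.nextQueue = [];    // workers that already saw it and want the next one
        this.buffer = null;     // the single frame buffer being passed around
        this.inFlight = false;  // is the buffer currently transferred to a worker?
    }

    // A worker sent a getFrame msg.
    onGetFrame(worker) {
        // If it has already seen the current frame, it waits for the next one.
        (worker.sawCurrentFrame ? this.nextQueue : this.currentQueue).push(worker);
        this.dispatch();
    }

    // A worker returned the Buffer (zero-copy transfer back to main).
    onReturnBuffer(worker, buffer) {
        this.buffer = buffer;
        this.inFlight = false;
        worker.sawCurrentFrame = true;
        this.dispatch();
    }

    dispatch() {
        if (this.inFlight) return; // buffer is currently with some worker
        if (this.currentQueue.length > 0 && this.buffer !== null) {
            // currentQueue not empty: keep dispatching the current buffer.
            const worker = this.currentQueue.shift();
            worker.postMessage({ buffer: this.buffer }, [this.buffer]);
            this.buffer = null; // detached by the transfer anyway
            this.inFlight = true;
        } else if (this.currentQueue.length === 0 && this.nextQueue.length > 0) {
            // currentQueue empty, nextQueue not: advance queues, get another frame.
            this.currentQueue = this.nextQueue;
            this.nextQueue = [];
            this.currentQueue.forEach((w) => { w.sawCurrentFrame = false; });
            this.captureNextFrame(); // readPixels into this.buffer, then dispatch() again
        }
    }

    captureNextFrame() { /* readPixels to this.buffer, then this.dispatch() */ }
}
```
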
nampereira commented 4 months ago

Yeah, I think that should work, particularly if all workers use the shared frame for a similar amount of time. If all workers are Wasm, it makes sense to always do something like this: copy the frame into the worker's wasm memory, let camera capture pass the frame along, and only then process.

For example, we could split detect(...) in the current worker to copy the buffer to wasm land and release it before processing (the processing is done when we call detect on the wasm module via this._detect).
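
Something like this, as a sketch of that split (Module, grayPtr, detect(), and the message shapes stand in for whatever the actual apriltag worker uses):

```js
// Hypothetical worker-side split: copy into wasm memory, release the
// buffer back to camera capture, then run detection.
onmessage = (msg) => {
    const { buffer, width, height } = msg.data;

    // 1. Copy the frame into wasm memory right away.
    Module.HEAPU8.set(new Uint8Array(buffer), grayPtr);

    // 2. Hand the ArrayBuffer straight back (zero-copy transfer) so camera
    //    capture can pass the frame along and move on to the next capture.
    postMessage({ type: 'frameReleased', buffer }, [buffer]);

    // 3. Only then run the comparatively slow wasm detection, i.e. the part
    //    currently done when detect(...) calls this._detect.
    const detections = detect(grayPtr, width, height);
    postMessage({ type: 'detections', detections });
};
```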