luxonis / depthai-hardware

Altium Designs for DepthAI Carrier Boards
MIT License
441 stars · 118 forks

Long-Range Depth Model - OAK-D-LR #247

Open Luxonis-Brandon opened 2 years ago

Luxonis-Brandon commented 2 years ago

Pre-order on the shop here: image

Start with the why:

While the OAK-D Series 1 and 2 (here) cover medium depth ranges (from ~20 cm to 16 meters), and the coming OAK-D-SR (short-range) series (USB https://github.com/luxonis/depthai-hardware/issues/241, PoE https://github.com/luxonis/depthai-hardware/issues/244) specializes in close-in depth (from 0 to 1 or 2 meters), OAK/DepthAI technology is actually capable of much longer depth sensing - we've tested up to 350 meters (as below) when using a 30cm stereo baseline:

image

And the only current way to accomplish this is to use OAK-FFC (e.g. OAK-FFC-3P or OAK-FFC-4P) with modular cameras such as the OV9282 (1MP global shutter grayscale), OV9782 (1MP global shutter color), or AR0234 (2.3MP global shutter color).

This works, but is not "production ready", and dealing with FFC cables is just annoying. So although it is possible to make a wide-stereo-baseline + narrow-FOV stereo pair using the options above (as shown below), such a setup is larger and harder to integrate than is desirable in many cases.

image

For longer ranges, using the setups above, we've found that a stereo baseline of 15cm (2x the 7.5cm baseline of the OAK-D Series 2), coupled with variable optics, can cover quite a wide range of depth sensing needs. From this testing, we've also found the AR0234 to be quite beneficial for long-range sensing, given its large pixel size (matching the OV9282) while having a higher resolution of 2.3MP, which effectively doubles the maximum depth sensing range compared to the OV9282.

The AR0234 also provides the benefit of supporting both global shutter grayscale (for maximizing low-light performance) and also global shutter color (for native pixel-aligned RGBD).

The desired maximum depth range varies quite a bit per application - with some situations requiring 100 meters, others 200 meters, and some 300+ meters. (The furthest the Luxonis team has tested with the AR0234 is 350 meters.) Supporting M12-mount lenses in the design enables choosing optics with FOVs (fields of view) corresponding to the required sensing distance.
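As a rough illustration of how baseline, optics, and sensor resolution trade off against maximum range (the 40° HFOV narrow lens and the 1 px minimum-disparity cutoff below are illustrative assumptions, not specs):

```python
import math

def max_depth_m(baseline_m, hfov_deg, width_px, min_disparity_px=1.0):
    """Rough upper bound on stereo range: Z = f_px * B / d."""
    f_px = (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return f_px * baseline_m / min_disparity_px

# AR0234 (1920 px wide) with a hypothetical 40 deg HFOV M12 lens:
print(max_depth_m(0.15, 40.0, 1920))  # ~15 cm baseline
print(max_depth_m(0.30, 40.0, 1920))  # ~30 cm baseline -> roughly double the range
```

Widening the baseline, narrowing the FOV, or increasing the horizontal pixel count each push that bound further out, which is why a wide-baseline + narrow-FOV + AR0234 combination is attractive for long range.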

Move to the how:

Leverage existing DepthAI ecosystem support for AR0234 to implement a dual-AR0234 M12-mount OAK-D-LR.

Move to the what: [NEW; based on feedback below]

ynjiun commented 2 years ago

Welcome OAK-D-LR!

I would suggest including the following features:

Add a polarizer filter lens option: since this long-range version is primarily for outdoor applications, there might be glare caused by windshield glass, reflections from a rainy road surface, etc. Adding a polarizer might remove or significantly attenuate the glare.

KeemBay: I wish this new product would use the latest 3rd-gen VPU!

Ship with 2 sets of lenses: I would suggest shipping this product with two sets of M12 rectilinear lenses: one set of narrow HFOV (e.g. around 40 to 50 deg) and one set of wide HFOV (e.g. around 80 to 120 deg). The unit should also store factory-calibrated matrices for these two sets of lenses. The shipping configuration could, for example, be the narrow-FOV lenses pre-installed with LensMode.Narrow_FOV as the default, such that rectifiedLeft/Right apply the narrow-FOV stereo calibration matrices. If users would like to change to the wide-FOV lenses, then after swapping the lenses themselves they would set LensMode.Wide_FOV in their program to produce the correct rectifiedLeft/Right output.

The reason for requesting two lens sets is that it is a convenient product configuration for an end user like me, so I don't need to go out and source the rectilinear lenses myself - and sometimes you cannot get quality rectilinear lenses in a low quantity like 2 lenses ; )

Synced left/right requirement: I am not sure of the current OAK-D left/right sync requirement in terms of time difference. I would suggest that the new product's left/right frame sync time difference be < 100us; less than 50us or smaller would be even better. When the left/right exposure start/end times are not in sync, the disparity map will not be accurate for moving objects. The problem is amplified particularly when the ego view is turning (with angular speed): for example, when a car is turning at a crossroad, say at 90 deg / 5.4 sec => ~16.6 deg/sec, then at around 25 m away there will be a disparity error of about 1 px for every 1 ms of out-of-sync left/right exposure. The further the distance, the bigger the disparity error becomes (at around 250 m it will be amplified by ~10X).
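As a rough back-of-envelope check of that 1 px per 1 ms figure (the 50° HFOV and 1920 px width below are just assumed values for a narrow-FOV AR0234 setup):

```python
import math

# Apparent pixel shift caused by ego rotation during the left/right exposure offset.
omega_deg_s = 90 / 5.4                 # ~16.7 deg/s turn rate
hfov_deg, width_px = 50, 1920          # assumed optics/sensor
f_px = (width_px / 2) / math.tan(math.radians(hfov_deg / 2))

dt_s = 1e-3                            # 1 ms left/right desync
shift_px = f_px * math.radians(omega_deg_s) * dt_s
print(f"~{shift_px:.1f} px of disparity error per 1 ms of desync")
```

Since the true disparity at 250 m is roughly 10x smaller than at 25 m, the same pixel error corrupts the depth estimate about 10x more at that distance.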

Future product roadmap:

PoE: I hope an IP67-enclosure PoE version will be available soon.

Luxonis-Brandon commented 2 years ago

Thanks @ynjiun !

Add a polarizer filter lens option: since this long-range version is primarily for outdoor applications, there might be glare caused by windshield glass, reflections from a rainy road surface, etc. Adding a polarizer might remove or significantly attenuate the glare.

This is the beauty of M12. This makes it possible to simply buy M12 solutions that have this capability built-in.

KeemBay: I wish this new product would use the latest 3rd-gen VPU!

Great point. We can likely design this using the OAK-SoM-Pro or OAK-SoM-MAX to have KeemBay support.

Ship with 2 sets of lenses: I would suggest shipping this product with two sets of M12 rectilinear lenses: one set of narrow HFOV (e.g. around 40 to 50 deg) and one set of wide HFOV (e.g. around 80 to 120 deg). The unit should also store factory-calibrated matrices for these two sets of lenses. The shipping configuration could, for example, be the narrow-FOV lenses pre-installed with LensMode.Narrow_FOV as the default, such that rectifiedLeft/Right apply the narrow-FOV stereo calibration matrices. If users would like to change to the wide-FOV lenses, then after swapping the lenses themselves they would set LensMode.Wide_FOV in their program to produce the correct rectifiedLeft/Right output.

Great suggestion on lens types/etc. and agreed on the FOVs here. One catch is that once the lenses are removed, the calibration is invalidated. So probably what is best here is to have just 2 different purchase options as the default for this model, such that they can come pre-calibrated. And then there's a decent chance that for a given purpose these might be "Good enough". And then a no-lens option for folks who need their own lenses (so as to not waste lenses/etc.)

  1. Narrow FOV (e.g. around 40 to 50 deg), lenses pre-installed and lock-ringed in place (likely with breakable staking), factory calibrated.
  2. Wide FOV (e.g. around 80 to 120 deg), lenses pre-installed and lock-ringed in place (likely with breakable staking), factory calibrated.
  3. No lenses. Lock-rings included in gift box.

We'll then also sell lenses on our site, just like our friends OpenMV do, here. Note they even have the polarization filter as well, which would very likely work here (not yet tested of course, as we haven't even started work on OAK-D-LR yet).

The reason for requesting two lens sets is that it is a convenient product configuration for an end user like me, so I don't need to go out and source the rectilinear lenses myself - and sometimes you cannot get quality rectilinear lenses in a low quantity like 2 lenses ; )

Agreed. Good points. Doing them as separately-orderable options will save cost. And it's required for factory-calibration to be applied and usable.

Synced left/right requirement: I am not sure of the current OAK-D left/right sync requirement in terms of time difference. I would suggest that the new product's left/right frame sync time difference be < 100us; less than 50us or smaller would be even better. When the left/right exposure start/end times are not in sync, the disparity map will not be accurate for moving objects. The problem is amplified particularly when the ego view is turning (with angular speed): for example, when a car is turning at a crossroad, say at 90 deg / 5.4 sec => ~16.6 deg/sec, then at around 25 m away there will be a disparity error of about 1 px for every 1 ms of out-of-sync left/right exposure. The further the distance, the bigger the disparity error becomes (at around 250 m it will be amplified by ~10X).

Yes. We'll do hardware sync here. My understanding is that the OAK-D sync is likely in the sub-1-microsecond range. It gets very hard to actually test that in terms of pixels, though. But as far as our testing has revealed (leveraging MIPI readouts/etc.), we're within a couple of microseconds at most. And the same would be true here.

PoE: wish will have IP67 enclosure PoE version available soon.

Agreed. We'll do a PoE version as well. I actually think PoE will be more important here. USB is a tad quicker to get out to test optics/etc., so we'll simply do that first - to learn the unknown unknowns here in terms of the discovery that always occurs on a new product like this.

Thanks again! -Brandon

stephansturges commented 2 years ago

I would be very interested in this product as well, especially the PoE version with an IP67 enclosure.

On the subject of filters and mechanics I would love the option to order the camera with a set of two "front plates", with the glass on one of these plates incorporating a polarization layer directly if that's possible.

Or alternatively (but more complicated), having a gasket-sealed "hatch" on the side of the assembly that would allow sliding a filter in front of the 3 lenses, held in place by a groove for example. This could then also allow the use of other filters for specific wavelengths of light, or ND filters to reduce exposure, without mounting anything to the outside of the sensor... but I realise this second option is more complicated as the use of filters would depend on the size of the lenses and sensors on the board.

ynjiun commented 2 years ago

@Luxonis-Brandon

Yes. We'll do hardware sync here. My understanding is that the OAK-D sync is likely in the sub-1-microsecond range. It gets very hard to actually test that in terms of pixels, though. But as far as our testing has revealed (leveraging MIPI readouts/etc.), we're within a couple of microseconds at most. And the same would be true here.

Question: is it possible to sync all 3 cameras, including the center 4K camera, with the other two stereo cameras?

Center 4K camera position: for this 15cm baseline version, I would recommend positioning the center 4K camera on the right side, with a distance of 5cm to the right camera and 10cm to the left camera. Plus, if it is possible to sync all three cameras, then we would have a stereo system with 3 baselines - 15cm, 10cm, and 5cm - in operation simultaneously. If the software and KeemBay hardware throughput can handle 3 pairs of stereo disparity computation, then we could have a hybrid system that fuses the 3 disparity maps together to generate a much higher accuracy disparity map. What do you think? I would be happy to join the prototyping work on this front if you can send me an early prototype ; ))

Luxonis-Brandon commented 2 years ago

On the subject of filters and mechanics I would love the option to order the camera with a set of two "front plates", with the glass on one of these plates incorporating a polarization layer directly if that's possible.

I love this idea! We will do this for sure.

And yes, I do like the idea of separate front covers for this, so that they can still be sealed.

Thanks, Brandon

Luxonis-Brandon commented 2 years ago

For this one we were thinking only 2x AR0234 global shutter color 2.3MP. So no 4K rolling shutter at all. Thoughts on that?

stephansturges commented 2 years ago

For my applications rolling shutter of any kind is a no-go, and 2MP global shutter sensors would actually be ideal for the balance between light sensitivity and quantity of data :)


gtech888AU commented 2 years ago

Brandon, have you considered adding a design feature which would allow users to modify the camera angles to face inwards rather than being forced to keep them parallel? This would open up usage of this model across different depth ranges (both near and far) due to the ability to customize the lenses into much narrower FOVs if desired - to date, this is something your other cameras besides the FFC all lack.

Luxonis-Brandon commented 2 years ago

Hi @gtech888AU ,

So actually, for stereo disparity depth, having the cameras angled inwards prevents the disparity depth from working properly. With disparity depth the key is for the cameras to see the scene from the same vantage point, just displaced. With angled cameras, one camera will see the left side of an object and the other will see the right - and so feature matching won't work.

On top of that, any angle between the sensors reduces the FOV of the depth. So it's a disadvantage all around.

Hi @stephansturges,

Thanks, makes sense. And PS I edited your post to remove private information.

Thanks all, Brandon

ynjiun commented 2 years ago

For this one we were thinking only 2x AR0234 global shutter color 2.3MP. So no 4K rolling shutter at all. Thoughts on that?

I think no 4K rolling shutter is fine.

Then I would suggest enhancing this unit with an extra AR0234 global shutter (thus a total of 3x AR0234) at the following position:

left <- 1.5 cm -> middle <- 13.5cm -> right

So the left/right baseline is 15cm, the middle/right baseline is 13.5cm, and the left/middle baseline is 1.5cm.

All three cameras are in h/w sync.

Luxonis-Brandon commented 2 years ago

YES! That's a great idea @ynjiun . As that gives both long-range and not-long-range depth out of the same model.

And with that idea I'm now pondering what the "right" distribution of baselines is.

My gut is something like the following might be super useful.

left <- 5cm -> middle <- 10cm -> right

As this gives the 3 following baselines:

  1. 15cm
  2. 10cm
  3. 5cm

The why of this distribution is that the "middle" baseline of 13.5cm and the widest baseline of 15cm feel very close in terms of usable range - and likely are. Whereas with something like this spread, the "middle" baseline is a bigger jump from the widest baseline, with 10cm covering a more distinct depth range than 15cm.

And then similarly, the 5cm baseline allows fairly close-in depth compared to both 10cm and 15cm.
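A quick sketch of how differently those three baselines reach (same assumed lens on all cameras; the 60° HFOV and the 2 px disparity cutoff are illustrative only):

```python
import math

# Rough relative comparison of the proposed 5/10/15 cm baselines.
hfov_deg, width_px = 60, 1920                       # assumed optics
f_px = (width_px / 2) / math.tan(math.radians(hfov_deg / 2))
for b_m in (0.05, 0.10, 0.15):
    print(f"{b_m*100:.0f} cm baseline -> ~{f_px * b_m / 2:.0f} m at 2 px disparity")
```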

Thoughts?

Thanks, Brandon

ynjiun commented 2 years ago

@Luxonis-Brandon

left <- 5cm -> middle <- 10cm -> right

Agree. This would definitely address those who need different baselines with a meaningful separation.

Luxonis-Brandon commented 2 years ago

Fascinating. Thanks. I just updated the specs at the very top with the culmination of discussions here. Added [NEW] for things that came from the discussion.

stephansturges commented 2 years ago

I would love to know if there is already a tentative timeline attached to this project?

Luxonis-Brandon commented 2 years ago

Unknown as of yet. We're triaging internally on when we can implement this. We'll circle back when we have a better view of schedule.

We're also triaging interest in this vs. other models, so it helps to know that there's interest. And if you want to pre-order, we can get you links for that :-) - it helps with customer voting. ;-)

microaisystems commented 2 years ago

@Luxonis-Brandon

Yes, send me the pre-order link.

Luxonis-Brandon commented 2 years ago

Thanks @microaisystems . Will get it up now. :-)

Luxonis-Brandon commented 2 years ago

It's live here: https://shop.luxonis.com/products/oak-d-lr-pre-order

(The LR will have to be quite a bit more expensive because it has the (expensive) 2.3MP global shutter color and (expensive) large/nice optics, which then also make the whole thing bigger (and more expensive).)

stephansturges commented 2 years ago

This is like computer vision catnip to me... I've ordered 2 units πŸ˜„

ynjiun commented 2 years ago

@Luxonis-Brandon

Great! Before I order the D-LR, a couple of questions: would this pre-ordered unit ship with Myriad X or KeemBay? Would this unit have a Raspberry Pi Module 4 integrated as well? Thank you.

Erol444 commented 2 years ago

Hi @ynjiun, We will likely design the OAK-D-LR with the OAK-SOM-PRO, so users will be able to choose either the Myriad X (BW2099) version or KeemBay (DM2399) version:)! Thoughts? Thanks, Erik

stephansturges commented 2 years ago

The option to use either module would be very interesting to me, especially if they are user-replaceable. I.e. I really want to get working with the Keem Bay architecture, but there are things I would like to test immediately which I know will work on the Movidius version without breaking any of my code. So the ideal scenario would be to have the option to "upgrade" to the Keem Bay by swapping out a module, if at all possible.

Erol444 commented 2 years ago

Hi @stephansturges, I believe these would indeed be user-replaceable! We designed the KeemBay version of the OAK-SoM-PRO to be backwards compatible with the MyriadX version of the OAK-SoM-PRO :) Thanks, Erik

stephansturges commented 2 years ago

That's ideal, thanks.

stephansturges commented 2 years ago

Hey guys! I see on the product page in the store that you are targeting Jan 2023 - is this still correct? If there's any update on the timeline I'd love to know more :)

Luxonis-David commented 1 year ago

Quick update on the project: we have sent the design to the FAB house and we estimate getting the first prototypes back in a month or so. Below are a few renders - a sneak peek at what we are working on. Please note these renders do not necessarily represent the look of the final product, and changes can be made without prior notice. image image (104) image (103)

stephansturges commented 1 year ago

Looking good! I can't wait to use this.

chad-green commented 1 year ago

YES! That's a great idea @ynjiun . As that gives both long-range and not-long-range depth out of the same model.

And with that idea I'm now pondering what the "right" distribution of baselines is.

My gut is something like the following might be super useful.

left <- 5cm -> middle <- 10cm -> right

As this gives the 3 following baselines:

  1. 15cm
  2. 10cm
  3. 5cm

The why of this distribution is that the "middle" baseline of 13.5cm and the widest baseline of 15cm feel very close in terms of usable range - and likely are. Whereas with something like this spread, the "middle" baseline is a bigger jump from the widest baseline, with 10cm covering a more distinct depth range than 15cm.

And then similarly, the 5cm baseline allows fairly close-in depth compared to both 10cm and 15cm.

Thoughts?

Thanks, Brandon

@Luxonis-Brandon the renderings look pretty awesome. What did you guys decide on for the baselines?

Luxonis-David commented 1 year ago

@chad-green we used 5cm for the shorter and 15cm for the longer baseline.

cafemoloko commented 1 year ago

Here's a sneak peek of the OAK-D-LR enclosure. However, this is not a completely assembled prototype, and the team is working on fine-tuning it.

image (3) image (2) image

stephansturges commented 1 year ago

This made my day @cafemoloko ! I'm so excited to start playing with this one!

Luxonis-David commented 1 year ago

Adding a 3D CAD file here so one can start tinkering on integrating the OAK-D-LR into a bigger system (the design is not final and is subject to change during the development process). OAK-D-LR_TopAssy-24NOV2022.zip

Alex-Beh commented 1 year ago

May I know where I can find the specifications? Looking forward to trying this amazing product!!!

Luxonis-David commented 1 year ago

@Alex-Beh some of the specs are written in the original post of this issue, I am pasting and adding more below:

ynjiun commented 1 year ago

@Luxonis-David

"OAK-SoM-Pro based"

Could it ship with the OAK-SoM-MAX module? Is it compatible? I would like to try the new module with the OAK-D-LR. Please advise.

Thanks.

Luxonis-David commented 1 year ago

@ynjiun no, unfortunately it won't be compatible with the OAK-SoM-MAX. We did, on the other hand, design the OAK-TBot, which is compatible with the OAK-SoM-MAX. We just released that design for fabrication, so there is still a long way (two months) to go before we will have it officially supported, but it will be in the Early Access store soon so you can reserve one from the first batch.
It comes with a mount and it features the following:

The purpose of this product is to cover logistics and light manufacturing. The idea is to have an easy-to-setup, "just works" camera that non-engineers can use for things such as:

See below attached renders: image

stephansturges commented 1 year ago

Quick question: do you have any news on delivery dates for the pre-orders of the LR model?

Luxonis-David commented 1 year ago

Hi @stephansturges,

Good question - the OAK-D-LR is currently being assembled in one of our offices and we can start shipping in around 14 days. We do need to add support for the new "dual" bootloader, as the OAK-D-LR exposes both USB and ETH ports; this is also the reason for the above ETA.

alexivins commented 1 year ago

@Luxonis-David Hi David, any chance you could send the depth coverage for the OAK-TBot? I am looking to see what kind of coverage I would get when placing the camera at a height of 0.3 to 0.5 meters (12 to 22 inches). I am assuming, based on the cameras, that this has depth capability. Could you also share details on the mount, as well as the price point for this one? Thank you.

ynjiun commented 1 year ago
  • Triple AR0234 2.3MP global shutter color sensor (DFOV 96Β°)

With this DFOV 96Β°, what would be the depth range for 15cm baseline pair?

Could we ship with a smaller FOV option, for example: FOV (D/H/V) 50Β°/40Β°/30Β° to get farther depth range?

Thanks.

Luxonis-David commented 1 year ago

@Luxonis-David Hi David, any chance you could send the depth coverage for the OAK-TBot? I am looking to see what kind of coverage I would get when placing the camera at a height of 0.3 to 0.5 meters (12 to 22 inches). I am assuming, based on the cameras, that this has depth capability. Could you also share details on the mount, as well as the price point for this one? Thank you.

The product is still in the design process and I can't share details about it at the moment. On the OAK-TBot we expect that, for a start, the same stereo cameras will be used as on the OAK-D-Pro: NFOV OV9282, with a bandpass filter for the model with active illumination and a cutoff filter for the model with no IR illumination; camera FOV (D/H/V) 89.5°/80°/55°, with a 7.5cm horizontal and 2.5cm vertical baseline. You might be able to calculate the depth coverage based on that IMO, but note that any of the above might change after we test the first prototypes and find what works best. The mount is also not finalized - we expect to have several different mounts; besides the one in the photo above, we plan on having one meant for more "rigid" mounting. image
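A minimal sketch of how one might estimate that coverage from the numbers above (the 95-level disparity search and the 1 px cutoff for the far end are assumptions, and lens distortion/rectification losses are ignored):

```python
import math

# Estimate stereo depth coverage from HFOV, horizontal resolution, and baseline.
hfov_deg, width_px, baseline_m = 80.0, 1280, 0.075
f_px = (width_px / 2) / math.tan(math.radians(hfov_deg / 2))

near_m = f_px * baseline_m / 95   # disparity search limit of the stereo engine
far_m = f_px * baseline_m / 1.0   # disparity falls to ~1 px
print(f"roughly {near_m:.1f} m to {far_m:.0f} m")
```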

I will follow up here when we have more details on the above.

Luxonis-David commented 1 year ago
  • Triple AR0234 2.3MP global shutter color sensor (DFOV 96Β°)

With this DFOV 96Β°, what would be the depth range for 15cm baseline pair?

Could we ship with a smaller FOV option, for example: FOV (D/H/V) 50Β°/40Β°/30Β° to get farther depth range?

Thanks.

Yes we can, but an MOQ and extended lead time would probably apply in order to do a custom-FOV lens assembly. For the lead time in such cases we normally quote 8 weeks by default, but it depends on whether we already have a batch in production and can make a custom assembly out of the same batch, which would shorten the lead time to just a few days/weeks.

ynjiun commented 1 year ago

Hi @Luxonis-Brandon

Perhaps it's time to talk about the software support for this new product (OAK-D-LR). The following are a few thoughts and questions I would like your feedback on:

Architecture: 3 pairs of disparity computation: the LR has 3 stereo pairs: right/left (baseline = 15cm), right/middle (baseline = 5cm) and middle/left (baseline = 10cm). Questions:

  1. Can we support a StereoDepth pipeline for all 3 pairs at the same time (concurrently)? That is, provide a pipeline API that connects the left/middle/right cameras into a node as below:

               β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
               β”‚                   β”‚ confidenceMap
               β”‚                   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
               β”‚                   β”‚rectifiedLeft
               β”‚                   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
    left           β”‚                   β”‚   syncedLeft
    ──────────────►│-------------------β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
               β”‚                   β”‚rectifiedMiddle
               β”‚                   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
    middle         β”‚                   β”‚   syncedMiddle
    ──────────────►│-------------------β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
               β”‚                   β”‚   depth [mm] (right/left, right/middle, middle/left)
               β”‚                   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
               β”‚    StereoDepth    β”‚    disparity (right/left, right/middle, middle/left)
               β”‚                   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
    right          β”‚                   β”‚   syncedRight
    ──────────────►│-------------------β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
               β”‚                   β”‚rectifiedRight
               β”‚                   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
    inputConfig    β”‚                   |     outConfig
    ──────────────►│-------------------β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Ί
               β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
  2. What would be the estimated throughput/bandwidth to support a left/middle/right StereoDepth API? Can we still have >= 30 fps at 1280x800 resolution?

  3. Can we still support the -dumpdispcost API?

  4. Is it possible to provide a callback API that allows the user to define a function to fuse the 3 dumpdispcost volumes, say (3, height, width, 96), into 1 dumpdispcost (1, height, width, 96) and pipe it back to compute the final depth and disparity? If this is possible, then the above API architecture can be simplified to a single depth and disparity output instead of 3x each.

Could we have a Zoom call or a separate email discussion on this?

Please advise. Thanks a lot for your help.

Luxonis-David commented 1 year ago

@themarpe would you be able to give some answers to the above already?

themarpe commented 1 year ago

Hi @ynjiun

One will be able to place 2 StereoDepth nodes to achieve stereo between all 3 cameras at the same time. The performance remains in line with the maximum that stereo on RVC2 can do, but it is split between these two instances, plus some overhead.

Yes, >= 30fps should be possible, but it depends on which stereo mode is selected. If only regular 64-level disparity is used, that should be the case. Refer to the following for figures: https://docs.luxonis.com/projects/api/en/latest/components/nodes/stereo_depth/#stereo-depth-fps and roughly divide by 2, plus some overhead.

dumpdispcost remains the same - you'd just be using 2 StereoDepth nodes.
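To make the two-node setup concrete, here is a minimal sketch (the CAM_A/B/C socket mapping and linking the color cameras' isp output straight into StereoDepth are assumptions; depending on the depthai version you may need to set resolutions or convert to grayscale via ImageManip first):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Three color cameras; the socket assignment is assumed, check the board docs.
cam_left = pipeline.create(dai.node.ColorCamera)
cam_middle = pipeline.create(dai.node.ColorCamera)
cam_right = pipeline.create(dai.node.ColorCamera)
cam_left.setBoardSocket(dai.CameraBoardSocket.CAM_B)
cam_middle.setBoardSocket(dai.CameraBoardSocket.CAM_A)
cam_right.setBoardSocket(dai.CameraBoardSocket.CAM_C)

# Two StereoDepth instances: long baseline (left/right) and short baseline (left/middle).
stereo_long = pipeline.create(dai.node.StereoDepth)
stereo_short = pipeline.create(dai.node.StereoDepth)
cam_left.isp.link(stereo_long.left)
cam_right.isp.link(stereo_long.right)
cam_left.isp.link(stereo_short.left)
cam_middle.isp.link(stereo_short.right)

# Stream both disparity maps to the host.
for name, stereo in (("disparity_long", stereo_long), ("disparity_short", stereo_short)):
    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName(name)
    stereo.disparity.link(xout.input)

with dai.Device(pipeline) as device:
    q_long = device.getOutputQueue("disparity_long", maxSize=4, blocking=False)
    q_short = device.getOutputQueue("disparity_short", maxSize=4, blocking=False)
    disp_long = q_long.get().getCvFrame()
    disp_short = q_short.get().getCvFrame()
```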

You MAY create such compute on the host through the currently exposed API as it stands, though we currently don't support inputting the dispcostdump back. CC: @szabi-luxonis
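As a host-side illustration of what such a fusion step could look like (purely a sketch on synthetic data; it follows the (pairs, height, width, 96) layout from the question and assumes the volumes have already been remapped to a common disparity axis, which the different baselines would require in practice):

```python
import numpy as np

def fuse_cost_volumes(costs, weights=None):
    """Fuse per-pair matching-cost volumes of shape (pairs, H, W, D) into one (H, W, D)."""
    costs = np.asarray(costs, dtype=np.float32)
    if weights is None:
        weights = np.ones(costs.shape[0], dtype=np.float32)
    weights = weights / weights.sum()
    return np.tensordot(weights, costs, axes=1)  # weighted sum over the pair axis

# Toy usage with random data standing in for three dumped cost volumes:
h, w, d = 80, 128, 96
fused = fuse_cost_volumes(np.random.rand(3, h, w, d))
disparity = fused.argmin(axis=-1)  # winner-takes-all: lowest cost per pixel
```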


Feel free to reach out to support at luxonis dot com for any specific private inquiries and link to your comment for context.

Thanks!

cafemoloko commented 1 year ago

new photos

PXL_20221208_074229648 PXL_20221208_073920126 PXL_20221208_074149506 PXL_20221208_073850701 PXL_20221208_073730462 PXL_20221208_073632083 PXL_20221208_073542150

Luxonis-David commented 1 year ago

@alexivins please check on a separate issue for OAK-TBot #326. We can move further discussion there.

ynjiun commented 1 year ago

One will be able to place 2 StereoDepth nodes to achieve stereo between all 3 cameras at the same time. The performance remains in line with the maximum that stereo on RVC2 can do, but it is split between these two instances, plus some overhead.

Hi @themarpe

Thank you for the information.

Couple more questions:

  1. "place 2 StereoDepth nodes": is there a limit on how many StereoDepth nodes we can place? for example, could we place 3 nodes (right/left pair, right/middle pair and middle/left pair) concurrently?
  2. would RVC3 provide higher throughputs for StereoDepth? by how much? does it support 1920x1080 resolution? Do we have a RVC3 performance table similar to RVC2 table you provided?

Thank you for your help.

SzabolcsGergely commented 1 year ago

One will be able to place 2 StereoDepth nodes to achieve stereo between all 3 cameras at the same time. The performance remains in line with the maximum that stereo on RVC2 can do, but it is split between these two instances, plus some overhead.

Hi @themarpe

Thank you for the information.

Couple more questions:

  1. "place 2 StereoDepth nodes": is there a limit on how many StereoDepth nodes we can place? for example, could we place 3 nodes (right/left pair, right/middle pair and middle/left pair) concurrently?

  2. would RVC3 provide higher throughputs for StereoDepth? by how much? does it support 1920x1080 resolution? Do we have a RVC3 performance table similar to RVC2 table you provided?

Thank you for your help.

  1. No implicit limit.
  2. Yes, it has higher throughput. Maximum 1280 width is supported. We don't have a public performance table.

ynjiun commented 1 year ago

We will likely design the OAK-D-LR with the OAK-SOM-PRO, so users will be able to choose either the Myriad X (BW2099) version or KeemBay (DM2399) version:)! Thoughts?

I already pre-ordered OAK-D-LR. How do I notify Luxonis to ship my order with KeemBay (DM2399)? Please advise. Thanks.