Incredible work on AdvantageScope, and thank you for your massive contributions to the FIRST community!
Vision simulation and debugging are still fairly immature areas in FRC. PhotonVision supports basic simulation (with wireframe "camera" views). I've seen some teams use an AdvantageScope rendered 3D camera view with a screen grabber (like OBS) as a simulated camera input to PhotonVision. Could this use case be supported natively in AdvantageScope?
Ideally, the frames of a rendered camera view in AdvantageScope could be published (perhaps via an MJPEG server?) so that they are accessible to vision code running locally in simulation. I'd guess the latency of such a setup would be comparable to a real camera / co-processor pipeline, and in any case latency can be adjusted in PhotonVision's simulation. (There could also be a PhotonVision-specific integration, but that would obviously require additional coordination with the PhotonVision team.)
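As a rough sketch of the consumer side: if AdvantageScope did expose a rendered view as an MJPEG stream, sim vision code could pull frames with WPILib's existing cscore classes. To be clear, the URL and port below are placeholders I made up; no such endpoint exists in AdvantageScope today.

```java
import org.opencv.core.Mat;

import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.HttpCamera;
import edu.wpi.first.cscore.HttpCamera.HttpCameraKind;

public class SimCameraSource {
  // Hypothetical endpoint: assumes AdvantageScope published its rendered
  // 3D camera view as an MJPEG stream on localhost (URL is a placeholder).
  private final HttpCamera camera =
      new HttpCamera(
          "ascope-sim",
          "http://localhost:5810/stream.mjpg",
          HttpCameraKind.kMJPGStreamer);

  private final CvSink sink = new CvSink("ascope-sim-sink");
  private final Mat frame = new Mat();

  public SimCameraSource() {
    sink.setSource(camera);
  }

  /** Grabs the latest rendered frame into {@code frame}; returns true on success. */
  public boolean grab() {
    // grabFrame returns 0 on error, otherwise the frame time in microseconds.
    return sink.grabFrame(frame) != 0;
  }

  public Mat getFrame() {
    return frame;
  }
}
```

From there the `Mat` could be fed into whatever vision pipeline runs in simulation, the same way frames from a real USB camera would be.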
Thanks for considering!