softbankrobotics-research / qibullet

Bullet simulation for SoftBank Robotics robots
Apache License 2.0

Adding a video Source in the simulation environment #67

Closed SutirthaChakraborty closed 3 years ago

SutirthaChakraborty commented 3 years ago

Hi, I am new to this. How can I feed a video into NAO's camera, and its music into the microphone sensor? This might be a basic question, sorry for that. Thanks, Suti

mbusy commented 3 years ago

Hi Suti, I'm not so sure that I understand your question:

How can I use a video to feed into the Nao's Camera

By default, the camera of the virtual NAO is simulated, and will return images captured in the simulation (see here for more information). From what I understand, you want to stream your own video data into the simulation?
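For reference, grabbing those simulated images with qibullet looks roughly like this. It's a minimal sketch assuming a recent qibullet version (the camera API changed around 1.4.0, where subscribeCamera started returning a handle):

```python
import cv2
from qibullet import SimulationManager, NaoVirtual

# Minimal sketch: retrieve images from the virtual NAO's top camera
simulation_manager = SimulationManager()
client = simulation_manager.launchSimulation(gui=True)
robot = simulation_manager.spawnNao(client, spawn_ground_plane=True)

handle = robot.subscribeCamera(NaoVirtual.ID_CAMERA_TOP)

try:
    while True:
        # getCameraFrame returns an OpenCV-compatible numpy array
        img = robot.getCameraFrame(handle)
        cv2.imshow("NAO top camera", img)
        cv2.waitKey(1)
except KeyboardInterrupt:
    robot.unsubscribeCamera(handle)
    simulation_manager.stopSimulation(client)
```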

and its music in the microphone sensor?

The answer to this one is easier :smile: The simulated robot doesn't have microphones or speakers, so that's not possible at the moment.

SutirthaChakraborty commented 3 years ago

Yes, I want to stream my own video data. How can I create the environment for that? Will there be a release with audio support any time soon, or is there another way to achieve that?

mbusy commented 3 years ago

I don't think that there is a nice and easy way to do that. You could always try to create a wall (that would act as a screen), segment your video into successive images, and change the texture of the wall to match each image, one after the other (that's quite dirty though). Something along the lines of the sketch below.
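Here is a rough sketch of that idea, swapping the wall's texture frame by frame with pybullet. The video path, the wall dimensions and its position are placeholders, and going through a temporary file for every frame is slow:

```python
import cv2
import pybullet
from qibullet import SimulationManager

simulation_manager = SimulationManager()
client = simulation_manager.launchSimulation(gui=True)
robot = simulation_manager.spawnNao(client, spawn_ground_plane=True)

# A thin box in front of the robot, acting as the screen
visual = pybullet.createVisualShape(
    shapeType=pybullet.GEOM_BOX,
    halfExtents=[0.01, 0.8, 0.45],
    physicsClientId=client)
screen = pybullet.createMultiBody(
    baseVisualShapeIndex=visual,
    basePosition=[1.5, 0.0, 0.5],
    physicsClientId=client)

capture = cv2.VideoCapture("orchestra.mp4")

while capture.isOpened():
    ret, frame = capture.read()
    if not ret:
        break

    # loadTexture only accepts a file path, so each frame is written to disk
    # before being applied to the wall (dirty and slow, as mentioned above).
    # Texture ids also accumulate, which will eat memory on long videos
    cv2.imwrite("/tmp/frame.png", frame)
    texture = pybullet.loadTexture("/tmp/frame.png", physicsClientId=client)
    pybullet.changeVisualShape(
        screen, -1, textureUniqueId=texture, physicsClientId=client)
```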

Regarding the audio, we probably won't add that to the simulator, at least not soon, since the audio is not handled by the pybullet APIs.

What are you actually trying to achieve by streaming that into the simulation? Depending on your goal, using another simulator (such as MORSE or Webots) might be more relevant.

SutirthaChakraborty commented 3 years ago

Thank you for your response. I am trying to simulate an orchestra environment (Video Link). I want the humanoid robot to react to it.

mbusy commented 3 years ago

Ok, and I guess that you want the robot to capture that so as to take into account the specificities of its sensors (e.g. camera FOV, resolution, mic sensitivity, etc.) before processing the video and the audio?

SutirthaChakraborty commented 3 years ago

Yes, any tips on how it can be achieved? Sorry, I am very new to this kind of simulation.

mbusy commented 3 years ago

Alright, I see! The camera of the robot will have a correct FOV, resolution and intrinsic parameters, but there will still be a reality gap between the simulated camera and the real one (the real camera will render noisier images).

My guess is that if you really want to take the sensor models into account, you should maybe use a different simulator that handles that kind of thing more accurately (you can take a look at Ignition or Webots). I think that there is already a NAO robot model available in Webots.
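That said, if you stick with qibullet, you can narrow the gap a little by degrading the simulated frames yourself. Purely as an illustration (the noise level here is arbitrary, not a model of the real camera):

```python
import numpy as np

def degrade_frame(frame, sigma=8.0):
    """Add Gaussian noise to a simulated camera frame to roughly mimic a
    noisier real camera (sigma is an arbitrary value, tune it as needed)."""
    noise = np.random.normal(0.0, sigma, frame.shape)
    noisy = frame.astype(np.float32) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

# e.g. noisy_img = degrade_frame(robot.getCameraFrame(handle))
```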

SutirthaChakraborty commented 3 years ago

Okay, thank you so much for all your suggestions. I will have a look at them.