Closed: fanman2014 closed this issue 3 years ago
Hi Mark,
You'll need to build an app that implements a socket server to receive sound source energy directions from the ODAS sound source localization (SSL) module. This app should also grab frames from the webcam.
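For example, here is a minimal Python sketch of that server side, assuming ODAS is configured to send its SSL "potential" output to a socket sink on port 9001 (the port is only an example; it must match your own ODAS configuration file):

```python
import socket

# ODAS connects to this app as a TCP client; the host/port must match the
# SSL "potential" sink in your ODAS configuration file (9001 is an example).
HOST, PORT = "0.0.0.0", 9001

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(1)
    print(f"Waiting for ODAS on port {PORT}...")
    conn, _ = server.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            # Each chunk is JSON text; printing it shows the data structure.
            print(data.decode("utf-8", errors="replace"))
```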
Then you'll want to project the sound energy directions onto the camera image using a mathematical camera model such as the pinhole camera model. Basically, trigonometry. Keep in mind that you'll need to know the distance between the sound sources and the camera to get a valid projection; it has to be assumed or measured by other means, since ODAS only gives you directions.
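As an illustration, below is a hedged sketch of that projection with OpenCV and NumPy. Everything marked as a placeholder is an assumption: the intrinsics K would come from a camera calibration, R and t describe how the microphone-array frame maps to the camera frame, and the source distance is assumed since ODAS only gives a direction. The distance matters exactly because of the translation t between the array and the camera.

```python
import numpy as np
import cv2

# --- Placeholder values: replace all of these with your own calibration ---
K = np.array([[600.0,   0.0, 320.0],      # camera intrinsics (pinhole model)
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                             # rotation: array frame -> camera frame
t = np.array([0.0, 0.0, 0.0])             # translation: array centre -> camera
ASSUMED_DISTANCE = 2.0                    # metres; ODAS only gives a direction

def direction_to_pixel(direction, distance=ASSUMED_DISTANCE):
    """Project an ODAS unit direction vector (x, y, z) to pixel coordinates."""
    point_array = np.asarray(direction) * distance   # 3D point in array frame
    point_cam = R @ point_array + t                  # 3D point in camera frame
    if point_cam[2] <= 0:                            # behind the camera
        return None
    uvw = K @ point_cam
    return int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])

# Example: draw one source on a single webcam frame.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    # A source along the camera's optical axis, under the identity-R assumption.
    pixel = direction_to_pixel((0.0, 0.0, 1.0))
    if pixel is not None:
        cv2.circle(frame, pixel, 20, (0, 0, 255), 2)
    cv2.imwrite("overlay.png", frame)
cap.release()
```

In a live app you would run the same projection inside the capture loop, drawing a marker for each source reported in the latest SSL message.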
As for the camera menu, it is a leftover from an internal project at the lab. Its purpose was to open a network camera stream. The code for this may still be lying around somewhere in the project.
Cedric
Thanks Cedric,
I guess the best place for me to start is using Python to bring in the source energy directions and then add them to a video stream. Is there any guidance, or are there examples available, to help me grab the data from the socket server?
cheers
Mark.
Sure thing. IntRoLab (the research lab behind this project) built a ROS client to use ODAS as a sensor for a robot. It is written in Python, and I don't think you need experience with ROS to follow what is going on.
I suggest you take a look at this script, which implements the server. The data coming out of ODAS is human-readable JSON, so you can print it to a terminal to easily see the data structure.
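For reference, a small parsing sketch once you have one message as a string. The field names used here (timeStamp, src, x, y, z, E) follow the SSL output structure you should see when printing the stream, but verify them against your own ODAS output:

```python
import json

# Example of one SSL message as printed by the socket server;
# the exact field names should be checked against your own output.
raw = """
{
    "timeStamp": 41,
    "src": [
        { "x": 0.498, "y": 0.072, "z": 0.864, "E": 0.253 },
        { "x": -0.201, "y": 0.512, "z": 0.835, "E": 0.018 }
    ]
}
"""

message = json.loads(raw)
for source in message["src"]:
    direction = (source["x"], source["y"], source["z"])  # unit vector
    energy = source["E"]                                  # detection energy
    if energy > 0.2:                                      # simple threshold
        print(f"Potential source at {direction}, energy {energy:.3f}")
```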
If JavaScript is more your thing, you can take a look at the server implementation of ODAS Studio here.
All sorted now, thanks.
Hi,
I wish to superimpose the sound locations onto a live webcam stream. In other words, I would like the peak sound energy locations to be overlaid on a live video stream showing the sound source. What is the best way to do this?
BTW, I also noticed that the app has a "camera" menu item. What does this actually do? When I select it, nothing happens.
Look forward to your help.
thanks
Mark.