Closed jocacace closed 9 years ago
The telemetry data should be at 5Hz, while video could be up to 30Hz (typically you will get only the first I-frame and, say, 10 P-frames). Do you have a piece of code I could look at? Do you use video?
Hi m3d, thank you for your reply. Actually I found a bug in my code, and yes, I can now see telemetry data at 5Hz, even though I am not able to read the complete data set (for instance, it seems that I am not receiving battery or state data, but maybe that is because I am testing the code without taking off, only using the Bebop in an idle state).
My big problem now is with the image... I read issue #3. I am working on Linux but I am not able to use cvideo, so I decided to try to read the Bebop image directly. Even though I just put a print into the image callback (so I don't handle the frame at all), it seems that I receive a new frame only every second. Maybe I haven't understood the image transmission protocol, or the difference between an I-frame and a P-frame...
Here is the code that I am using to acquire the data (note that I started from the "demo.py" program):
```python
def videoCallback( frame, robot=None, debug=False ):
    if frame:
        print strftime("%S", gmtime()), " new image"

def demo( drone ):
    print "demo ..."
    drone.videoCbk = videoCallback
    drone.videoEnable()
    try:
        drone.trim()
        while 1:
            drone.update( cmd=None )
    except ManualControlException, e:
        print
        print "ManualControlException"
        if drone.flyingState is None or drone.flyingState == 1: # taking off
            drone.emergency()
        drone.land()

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print __doc__
        sys.exit(2)
    metalog = None
    if len(sys.argv) > 2:
        metalog = MetaLog( filename=sys.argv[2] )
    if len(sys.argv) > 3 and sys.argv[3] == 'F':
        disableAsserts()
    drone = Bebop( metalog=metalog )
    demo( drone )
    print "Battery:", drone.battery
```

And here is an output (via shell) of the program:

![screenshot - 04122015 - 09 33 12 pm](https://cloud.githubusercontent.com/assets/10219563/7107163/d29e2e88-e15b-11e4-9be8-0109003ce5bb.png)

As you can see, "new image" should be printed every time a new image is acquired... and it is printed only once per second.
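To measure the actual callback rate more precisely than eyeballing timestamps, a small counter in the callback is handy. This is a Python 3 sketch (the katarina code itself is Python 2); `RateCounter` is an illustrative helper, not part of the library:

```python
import time

class RateCounter:
    """Counts events and reports the average rate over a sliding window."""
    def __init__(self):
        self.timestamps = []

    def tick(self, now=None):
        """Record one event; 'now' can be injected for testing."""
        if now is None:
            now = time.time()
        self.timestamps.append(now)
        # keep only the last 5 seconds of events
        self.timestamps = [t for t in self.timestamps if now - t <= 5.0]

    def rate(self):
        """Events per second over the retained window."""
        if len(self.timestamps) < 2:
            return 0.0
        span = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / span if span > 0 else 0.0

counter = RateCounter()

def videoCallback(frame, robot=None, debug=False):
    if frame:
        counter.tick()
        print("new image, approx. %.1f Hz" % counter.rate())
```

With one frame per second, the reported rate settles around 1.0 Hz.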
Any suggestion? There is something wrong in my code?
Thank you
OK, first of all, battery and many other values are reported only on change. I wanted to add ARCOMMANDS_ID_COMMON_SETTINGS_CMD_ALLSETTINGS, but I do not have a Bebop at the moment, so I have no way to test it. In your case, 1Hz video is expected. I moved the video assembly into the Bebop class, which means that the parameter selecting whether to use only I-frames or also P-frames is now part of the Bebop constructor. Use `drone = Bebop( metalog=metalog, onlyIFrames=False )`.
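Because of the change-only reporting, a field like battery can stay silent for long stretches. One common workaround is to merge every partial update into a cache of last-known values; a minimal Python 3 sketch, with made-up field names rather than the actual katarina navdata attributes:

```python
class TelemetryCache:
    """Keeps the last-known value of every telemetry field,
    since the drone reports many fields only when they change."""
    def __init__(self):
        self.state = {}

    def update(self, partial):
        """Merge a (possibly partial) navdata update into the cache."""
        self.state.update(partial)

    def get(self, key, default=None):
        return self.state.get(key, default)

cache = TelemetryCache()
cache.update({"battery": 95, "flyingState": 0})
cache.update({"flyingState": 1})   # battery not re-sent in this update
assert cache.get("battery") == 95  # still available from the cache
```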
The motivation for I-frames only is that you can read/display them with OpenCV via a temporary file, and they are mostly sufficient to "get started" with your image processing.
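The temporary-file route could look roughly like the Python 3 sketch below. `decode_iframe` is a hypothetical helper, and the OpenCV decoding step is treated as optional: the function just returns None when cv2 is not installed or the data cannot be decoded.

```python
import os
import tempfile

def decode_iframe(frame_bytes):
    """Write a raw H.264 I-frame to a temporary file and try to decode it.
    Returns the decoded image, or None when decoding is not possible."""
    fd, path = tempfile.mkstemp(suffix=".h264")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(frame_bytes)
        try:
            import cv2  # decoding step; optional dependency
            cap = cv2.VideoCapture(path)
            ok, img = cap.read()
            cap.release()
            return img if ok else None
        except ImportError:
            return None
    finally:
        os.remove(path)  # clean up the temporary file either way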
Perfect, now everything is clearer! Thank you a lot for your explanation... So, sorry for another question, but I am quite new to image decoding and things like that... Does this mean that if I want a complete image frame I have to wait 1 second? Is the image decoding faster if I use something like cvideo?
In the original SDK there is a source code example that shows how to display the on-board image using mplayer, and that way seems faster (like in the Android app), but I am not able to understand how to use the acquired image. To implement my algorithm I need an image at least at 5Hz...
You may try to replay automatically logged video packets via `python ./play.py logs/navdata_150412_213255.log`. If you need at least 5Hz you have to deal with P-frames (there are typically 1 I-frame and 29 P-frames per second). You can probably try mplayer with https://github.com/robotika/katarina/blob/master/samples/video2stdout.py but make sure you redirect the stdout to a pipe.
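For completeness, telling I-frames and P-frames apart in the raw H.264 stream comes down to the NAL unit header: the low 5 bits of the byte following the Annex-B start code give the NAL unit type (5 = IDR slice, i.e. an I-frame; 1 = non-IDR slice, i.e. a predicted frame). A Python 3 sketch, assuming each packet begins with a start code:

```python
def frame_type(packet):
    """Classify an H.264 Annex-B packet as 'I', 'P', or 'other'
    from the NAL unit type in the low 5 bits of the header byte."""
    # strip the 00 00 00 01 (or 00 00 01) start code
    if packet.startswith(b"\x00\x00\x00\x01"):
        header = packet[4]
    elif packet.startswith(b"\x00\x00\x01"):
        header = packet[3]
    else:
        return "other"
    nal_type = header & 0x1F
    if nal_type == 5:
        return "I"       # IDR slice: a self-contained key frame
    if nal_type == 1:
        return "P"       # non-IDR slice: depends on previous frames
    return "other"       # SPS/PPS/SEI and other non-slice units

assert frame_type(b"\x00\x00\x00\x01\x65data") == "I"  # 0x65 & 0x1F == 5
assert frame_type(b"\x00\x00\x00\x01\x41data") == "P"  # 0x41 & 0x1F == 1
```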
Just to be sure of the meaning (before closing this issue :) )... with `python ./play.py logs/navdata_150412_213255.log` I can replay the video a posteriori, not while flying, right? I want to implement a visual servoing algorithm, similar to the one implemented in the demo.py example, where the drone follows the red cup...
Right. I will try to find some alternative for cvideo or make it work under Linux. P.S. I would use https://github.com/robotika/katarina/blob/master/behaviors/navbox.py as a more advanced reference, because it uses multiprocessing and the image processing results are automatically logged.
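The navbox.py idea of decoupling the control loop from image processing can be sketched with a worker and a queue. This Python 3 sketch uses threading for brevity (navbox.py itself uses multiprocessing), and the worker body is a placeholder, not the actual red-cup detection:

```python
import queue
import threading

def processing_worker(frames, results):
    """Consume frames from a queue and publish processing results,
    so the main control loop is never blocked by image processing."""
    while True:
        frame = frames.get()
        if frame is None:          # sentinel: shut down the worker
            break
        # placeholder for real image processing (e.g. red cup detection)
        results.put(len(frame))

frames, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=processing_worker, args=(frames, results))
worker.start()
frames.put(b"\x00" * 16)           # hand a frame to the worker
frames.put(None)                   # stop the worker
worker.join()
```

The control loop only does non-blocking puts and gets, so a slow detector drops behind without stalling drone.update().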
OK, again, thank you! In addition I will try to use cvideo on Linux, or I will move to Windows!
Hi Guys,
I have a question about the navdata of the robot that comes with the "update" function. What is the rate of the telemetry data? I am using the code, but the update function provides me updated data at a very low frequency (something like 0.5Hz)... In addition, do you know if there is SDK documentation where all the kinds of provided data are listed?
P.S. I am using Ubuntu 12.04 to run the code.
Thank you a lot for your support!