squeakus closed this issue 6 years ago
Hi
I am also interested in the best way to grab the frames so I can use them with OpenCV. I can't manage to get that feed working.
Hey, I ported the keyboard code to OpenCV and set up a frame extraction method. The code can be accessed here: https://github.com/Ubotica/telloCV
The link to https://github.com/Ubotica/telloCV can now be found in TelloPy's README.md.
Hi, first of all thank you for this great library! I am writing tracking code so that the Tello will follow a green ball; it already works with a standard OpenCV video stream. I would be lost without your example code.
I am building on your keyboard_and_video example. As best I understand it, you set up a handler function, and a thread in tello.py streams packets to it as they are received. The packets are 1460 bytes long and are piped directly into the display window using mplayer.
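For anyone trying to turn those raw packets into frames themselves: the Tello's feed is an H.264 elementary stream, so one option is to accumulate the 1460-byte packets and split the buffer on Annex B NAL start codes before handing units to a decoder. A minimal stdlib-only sketch of that buffering step (the class and method names here are my own illustration, not TelloPy's API):

```python
class NalUnitBuffer:
    """Accumulates raw video packets and yields complete H.264 NAL units.

    The Tello sends an H.264 elementary stream in ~1460-byte UDP packets.
    NAL units begin with the Annex B start code b'\x00\x00\x00\x01', so a
    unit is complete once the next start code arrives.
    """

    START = b"\x00\x00\x00\x01"

    def __init__(self):
        self._buf = bytearray()

    def feed(self, packet):
        """Add one packet's bytes; return any complete NAL units."""
        self._buf.extend(packet)
        units = []
        start = self._buf.find(self.START)
        while start != -1:
            nxt = self._buf.find(self.START, start + len(self.START))
            if nxt == -1:
                break  # the rest of this unit has not arrived yet
            units.append(bytes(self._buf[start:nxt]))
            start = nxt
        if start > 0:
            del self._buf[:start]  # discard bytes before the pending unit
        return units
```

A packet handler would call `feed()` on each incoming packet and pass the returned units to whatever decoder you use.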
I was going to copy your video_effects implementation, but it differs slightly in how it collects the frames: it creates the stream directly, wraps it in an av (PyAV) container, and then decodes that.
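As I understand it, that approach works because `av.open()` accepts any file-like object with a blocking `read()` method, and the stream the library hands back is fed by the packet-receiving thread. A stdlib-only sketch of that pattern (the class name and queue plumbing below are my own guess at the idea, not TelloPy's actual internals):

```python
import queue


class PacketStream:
    """A minimal file-like object that av.open() could consume.

    The receiving thread pushes raw video packets in via write();
    the decoder pulls bytes out via read(), blocking briefly until
    enough data has arrived.
    """

    def __init__(self):
        self._queue = queue.Queue()
        self._leftover = b""

    def write(self, packet):
        """Called from the packet-handler side with one packet's bytes."""
        self._queue.put(bytes(packet))

    def read(self, size):
        """Return up to `size` bytes, waiting briefly for more data."""
        data = self._leftover
        while len(data) < size:
            try:
                data += self._queue.get(timeout=1.0)
            except queue.Empty:
                break  # give av whatever we have rather than hang forever
        self._leftover = data[size:]
        return data[:size]
```

With a real feed you would then do something like `container = av.open(stream)` and iterate `container.decode(video=0)`, converting each decoded frame to a NumPy array for OpenCV.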
Should I maintain two separate approaches for the frames, or is there a way to parse the packets received in the videoHandler function into frames?
Any help would be greatly appreciated!