Aldarme opened this issue 1 year ago
I am also having this issue.
try to resize:

```python
from djitellopy import tello
import cv2

me = tello.Tello()
me.connect()
print(me.get_battery())
me.streamon()

while True:
    img = me.get_frame_read().frame
    img = cv2.resize(img, (360, 240))
    # img = cv2.flip(img, 0)
    cv2.imshow("Image", img)
    cv2.waitKey(1)
```
I already tried that solution, but it does not solve the problem. I specifically have this issue with this API. When I use the RoboMaster API, I do not have this issue. Best,
Yeah, I have tried resizing too. The issue is that `self.frame` inside `BackgroundFrameRead` always seems to be 3-4 seconds behind. So if you just save the video to a file, it looks smooth. However, if you display it (at any size), the frame you are displaying is 3-4 seconds behind.
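To see why a deep decode buffer causes this kind of lag, here is a small self-contained sketch (a plain `queue.Queue` with made-up frame numbers, no drone involved) contrasting reading frames in arrival order with draining the queue down to the newest entry:

```python
import queue

# Simulate a decoder thread that has buffered 90 frames (about 3 seconds
# at 30 FPS) faster than the display loop has consumed them.
frame_buffer = queue.Queue()
for frame_id in range(90):
    frame_buffer.put(frame_id)

# Pipeline-style read: you get the oldest buffered frame, i.e. old video.
stale = frame_buffer.get()
print(stale)   # 0 -> roughly 3 seconds behind at 30 FPS

# "Newest" strategy: drain everything but the last entry, then read it.
while frame_buffer.qsize() > 1:
    frame_buffer.get()
fresh = frame_buffer.get()
print(fresh)   # 89 -> the most recently decoded frame
```

The drained frames are simply discarded, which is fine for live display but not for recording.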
@Aldarme where is the RoboMaster API? It might help us isolate the issue if we can compare how they handle the video with how this library does.
Have you tried using the library straight from the latest master branch? Don't use the one from PyPI or a release, as those are not up to date.
Yes, I am using the latest on master. I am not installing it from PyPI. I am using the latest version with the updates that use `BackgroundFrameRead` and `pyav` to decode the messages. It seems quite buggy and laggy. If you use it as is, you have the 3-4 second lag. If you lower the framerate to 5 FPS, then `pyav` throws an exception that there are not enough frames, and you have to change some of the code to stop it from throwing that error (but that is a separate issue, so I will not discuss it further here). I am mostly worried about the 3-4 second lag.
@hildebrandt-carl Here it is: https://robomaster-dev.readthedocs.io/en/latest/python_sdk/beginner_drone.html#initialize-the-robot
@Aldarme thanks, I will take a look at their implementation and see if I can figure out what they do to get rid of the lag.
Okay, so I have a non-lagging version. Most of the work is based on the RoboMaster API; however, I removed large amounts of it and made it work with my implementation. For anyone interested, here is how you can get it to work:
You first need a `VideoConnection` class which handles the communication between the Tello drone and your computer. It listens on a UDP socket and saves all incoming data into a queue.
```python
import queue
import socket
import threading


class VideoConnection(object):
    def __init__(self):
        self._sock = None
        self._sock_queue = queue.Queue(32)
        self._sock_recv = None
        self._recv_count = 0
        self._receiving = False

    def __del__(self):
        if self._sock:
            self._sock.close()

    def connect(self, camera_ip: str = "0.0.0.0", camera_port: int = 11111):
        try:
            self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self._sock.bind((camera_ip, camera_port))
            print("OVER")
        except Exception as e:
            print("StreamConnection: connect addr {0}:{1}, exception {2}".format(camera_ip, camera_port, e))
            return False
        self._sock_recv = threading.Thread(target=self._recv_task)
        self._sock_recv.start()
        print("StreamConnection {0} successfully!".format(camera_ip))
        return True

    def disconnect(self):
        self._receiving = False
        self._sock_queue.put(None)
        if self._sock_recv:
            self._sock_recv.join()
        self._sock.close()
        self._sock_queue.queue.clear()
        self._recv_count = 0
        print("StreamConnection: disconnected")

    def _recv_task(self):
        self._receiving = True
        print("StreamConnection: _recv_task, Start to receiving Data...")
        while self._receiving:
            try:
                if self._sock is None:
                    break
                data, addr = self._sock.recvfrom(4096)
                if not self._receiving:
                    break
                self._recv_count += 1
                if self._sock_queue.full():
                    # Drop the oldest packet so the receiver never blocks
                    print("StreamConnection: _recv_task, sock_data_queue is full.")
                    self._sock_queue.get()
                else:
                    self._sock_queue.put(data)
            except socket.timeout:
                print("StreamConnection: _recv_task, recv data timeout!")
                continue
            except Exception as e:
                print("StreamConnection: recv, exceptions:{0}".format(e))
                self._receiving = False
                return

    def read_buf(self, timeout=2):
        try:
            buf = self._sock_queue.get(timeout=timeout)
            return buf
        except Exception as e:
            print("StreamConnection: read_buf, exception {0}".format(e))
            return None
```
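If you want to sanity-check this receive pattern without a drone, you can push a datagram at a listener over localhost. The following is a hypothetical stand-in for the receive loop above, boiled down to one packet; the payload and port choice are made up for the demo:

```python
import queue
import socket
import threading

# Bind a UDP socket on an ephemeral localhost port, mirroring the class above.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))        # port 0 = let the OS pick a free port
port = sock.getsockname()[1]
packets = queue.Queue(32)

def recv_task():
    # One recvfrom is enough for the demo; the real loop runs until stopped.
    data, _addr = sock.recvfrom(4096)
    packets.put(data)

receiver = threading.Thread(target=recv_task)
receiver.start()

# Pretend to be the drone: send one "video" datagram at the listener.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"fake-h264-payload", ("127.0.0.1", port))

buf = packets.get(timeout=2)       # same read pattern as read_buf()
receiver.join()
sock.close()
sender.close()
print(buf)  # b'fake-h264-payload'
```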
Then you need a `VideoHandler` class. This class takes data from the queue filled by the `VideoConnection` and converts it using an H.264 decoder.
```python
import queue
import threading

import libmedia_codec
import numpy as np

from video_connection import VideoConnection


class VideoHandler(object):
    def __init__(self, camera_ip: str = "0.0.0.0", camera_port: int = 11111):
        # Init variables
        self._video_frame_count = 0
        self._video_streaming = False
        # Create the video queue
        self._video_frame_queue = queue.Queue(64)
        # Create the video decoder
        self._video_decoder = libmedia_codec.H264Decoder()
        # Turn on the video connection
        self._video_stream_conn = VideoConnection()
        self._video_stream_conn.connect(camera_ip, camera_port)
        self._video_decoder_thread = threading.Thread(target=self._video_decoder_task)
        self._video_decoder_thread.start()

    def _h264_decode(self, data):
        res_frame_list = []
        frames = self._video_decoder.decode(data)
        for frame_data in frames:
            (frame, width, height, ls) = frame_data
            if frame:
                # np.fromstring is deprecated; np.frombuffer is the drop-in replacement
                frame = np.frombuffer(frame, dtype=np.ubyte, count=len(frame))
                frame = frame.reshape((height, width, 3))
                res_frame_list.append(frame)
        return res_frame_list

    def _video_decoder_task(self):
        self._video_streaming = True
        print("_video_decoder_task, started!")
        while self._video_streaming:
            data = b''
            buf = self._video_stream_conn.read_buf()
            if not self._video_streaming:
                break
            if buf:
                data += buf
                frames = self._h264_decode(data)
                for frame in frames:
                    try:
                        self._video_frame_count += 1
                        if self._video_frame_count % 30 == 1:
                            print("video_decoder_task, get frame {0}.".format(self._video_frame_count))
                        self._video_frame_queue.put(frame, timeout=2)
                    except Exception as e:
                        print("_video_decoder_task, decoder queue is full, e {}.".format(e))
                        continue
        print("_video_decoder_task, quit.")

    def read_video_frame(self, timeout=3, strategy="newest"):
        if strategy == "pipeline":
            return self._video_frame_queue.get(timeout=timeout)
        elif strategy == "newest":
            # Discard everything but the most recent frame to avoid lag
            while self._video_frame_queue.qsize() > 1:
                self._video_frame_queue.get(timeout=timeout)
            return self._video_frame_queue.get(timeout=timeout)
        else:
            print("read_video_frame, unsupported strategy:{0}".format(strategy))
            return None
```
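The byte-to-image conversion inside `_h264_decode` can be checked in isolation: a decoded frame arrives as a flat RGB byte string of length `width * height * 3`, and reshaping recovers the image. The dimensions and pixel bytes below are made up for the demo:

```python
import numpy as np

# Pretend the decoder handed us a tiny 4x2 RGB frame as raw bytes.
width, height = 4, 2
raw = bytes(range(width * height * 3))     # 24 bytes of fake pixel data

# Same conversion as _h264_decode, using np.frombuffer (np.fromstring is deprecated).
frame = np.frombuffer(raw, dtype=np.ubyte, count=len(raw))
frame = frame.reshape((height, width, 3))  # rows x cols x RGB

print(frame.shape)   # (2, 4, 3)
print(frame[0, 0])   # first pixel: [0 1 2]
```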
You can then call the `read_video_frame` function with the strategy set to `"newest"` and you will always have the latest frame. I am now getting near-perfect video performance, with no lag at all. One thing you will need to do is install `libmedia_codec`. To do that, go to the RoboMaster SDK repository, go into the `lib/libmedia_codec` folder, and install the Python package using:

```shell
python3 -m pip install .
```

Best of luck.
@hildebrandt-carl Hey, you have produced impressive work! Would you consider opening a pull request with it?
Sure. I have moved away from using this library and started writing my own. However, it shouldn't take me too long to create a pull request for this library. I will try to get it done before the end of the weekend.
Could you create that PR? @hildebrandt-carl
Hello, sadly I can't build `libmedia_codec`, I get this error:

```
Processing c:\ikarus\ansteuerung\000testgithubsol\robomaster-sdk\lib\libmedia_codec
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: libmedia-codec
  Building wheel for libmedia-codec (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for libmedia-codec (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [67 lines of output]
      running bdist_wheel
      running build
      running build_ext
      <string>:44: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
      CMake Error at CMakeLists.txt:4 (project):
        Generator
          NMake Makefiles
        does not support platform specification, but platform
          x64
        was specified.

      -- Configuring incomplete, errors occurred!
      See also "C:/Ikarus/Ansteuerung/000TestGithubSol/RoboMaster-SDK/lib/libmedia_codec/build/temp.win-amd64-cpython-310/Release/CMakeFiles/CMakeOutput.log".
      Traceback (most recent call last):
        File "C:\Users\ls15\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 351, in <module>
          main()
        File "C:\Users\ls15\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 333, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "C:\Users\ls15\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 249, in build_wheel
          return _build_backend().build_wheel(wheel_directory, config_settings,
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\build_meta.py", line 413, in build_wheel
          return self._build_with_temp_dir(['bdist_wheel'], '.whl',
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\build_meta.py", line 398, in _build_with_temp_dir
          self.run_setup()
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\build_meta.py", line 484, in run_setup
          super(_BuildMetaLegacyBackend,
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\build_meta.py", line 335, in run_setup
          exec(code, locals())
        File "<string>", line 91, in <module>
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\__init__.py", line 87, in setup
          return distutils.core.setup(**attrs)
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
          return run_commands(dist)
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
          dist.run_commands()
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\dist.py", line 1208, in run_command
          super().run_command(command)
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\wheel\bdist_wheel.py", line 325, in run
          self.run_command("build")
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\dist.py", line 1208, in run_command
          super().run_command(command)
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\_distutils\command\build.py", line 132, in run
          self.run_command(cmd_name)
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\dist.py", line 1208, in run_command
          super().run_command(command)
        File "C:\Users\ls15\AppData\Local\Temp\pip-build-env-3swhnhj9\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "<string>", line 49, in run
        File "<string>", line 77, in build_extension
        File "C:\Users\ls15\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 369, in check_call
          raise CalledProcessError(retcode, cmd)
      subprocess.CalledProcessError: Command '['cmake', 'C:\\Ikarus\\Ansteuerung\\000TestGithubSol\\RoboMaster-SDK\\lib\\libmedia_codec', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\\Ikarus\\Ansteuerung\\000TestGithubSol\\RoboMaster-SDK\\lib\\libmedia_codec\\build\\lib.win-amd64-cpython-310\\', '-DPYTHON_EXECUTABLE=C:\\Users\\ls15\\AppData\\Local\\Programs\\Python\\Python310\\python.exe', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\\Ikarus\\Ansteuerung\\000TestGithubSol\\RoboMaster-SDK\\lib\\libmedia_codec\\build\\lib.win-amd64-cpython-310\\', '-A', 'x64']' returned non-zero exit status 1.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for libmedia-codec
Failed to build libmedia-codec
ERROR: Could not build wheels for libmedia-codec, which is required to install pyproject.toml-based projects
```
Any Help would be appreciated!
@steiraa at first glance, the error message `does not support platform specification, but platform x64 was specified` suggests you are trying to build for 64-bit Windows, but the CMake configuration provided by libmedia_codec does not support that platform with the NMake Makefiles generator.
@hildebrandt-carl I would also greatly appreciate a pull request with your video decoding code. I can merge it after testing, but I have to find a working Tello battery first; they all seem to die after some weeks of not using/charging.
@hildebrandt-carl How is the progress on the PR?
Hi, sorry just before the holidays I managed to get Covid, and I am not in the office this week. I will be back on Jan 2nd and then get something out.
This seems to be related and might be a problem: https://github.com/dji-sdk/RoboMaster-SDK/pull/49 It appears to me the `libmedia_codec` library is not available as a PyPI package?
@M4GNV5 You have to install it manually. Copy the whole `lib` folder and then run `python3 -m pip install .` (see next post).
@Aldarme @IEEERoboticsWSU I have created a working version on my fork. I have also added an example which displays the video in real time on your screen using `matplotlib`. There are a few issues with this though:

- It depends on `libmedia_codec`, which on Ubuntu 22.04 requires additional libraries: `sudo apt-get install libavcodec-dev libswscale-dev libopus-dev -y`

I am not sure what the etiquette is in this case. Do I submit a pull request even though I know about these issues? Or do we wait and resolve these first?

If we wait, would you mind taking a look at my fork and seeing if you are able to run it? My fork can be found here.
To get it running, clone it and install everything using:

```shell
sudo apt-get install libavcodec-dev libswscale-dev libopus-dev -y
git clone https://github.com/damiafuentes/DJITelloPy.git
cd DJITelloPy
pip install -e .
cd lib/libmedia_codec
pip install -e .
```

Then go to the examples and run:

```shell
python3 display-video.py
```
Note: the drone's motors will go into a low-speed mode to cool the drone, but the drone won't take off. A lag-free video feed should appear on your screen, and 60 seconds later everything should shut down.
Tested on a Jetson Nano and I am getting more lag with this. Setup: Ubuntu 20.04, Python 3.8.10, ARM architecture (https://developer.nvidia.com/embedded/jetson-nano-developer-kit). Logs: https://hastebin.com/odaxakipij.yaml @hildebrandt-carl
@IEEERoboticsWSU that's very strange. I was running it on x86. However, I have some ARM Odroids; I will give it a try on an Odroid this weekend and let you know what I find. That said, I run my own custom DJI library on the Odroids (which uses the same camera code) and it ran fine there. Is there any chance you have an x86 machine you can try it on?
Also @Aldarme or @M4GNV5 have either of you had a chance to try it?
@IEEERoboticsWSU what was the lag when using the original? When I used the original, I was only getting roughly every 30th frame.
I do have an x86 machine, however this is for a robotics project, so getting this to work on the Jetson is a priority for me.
@IEEERoboticsWSU a thought occurred: what type of Tello are you using, the Tello or the Tello Edu? When I used the Tello, I found there was significantly more lag than with the Tello Edu. Right now my results are for a Tello Edu. This weekend I can try running my solution on ARM, and on both the Tello and the Tello Edu.
I have the EDU
Also on Ubuntu 22.04 x64 with Python 3.8.16 I have this error:

```
[INFO] tello.py - 131 - Tello instance was initialized. Host: '192.168.10.1'. Port: '8889'.
[INFO] tello.py - 431 - Send command: 'command'
[INFO] tello.py - 455 - Response command: 'ok'
[INFO] tello.py - 431 - Send command: 'streamon'
[INFO] tello.py - 455 - Response streamon: 'ok'
OVER
StreamConnection: _recv_task, Start to receiving Data...
StreamConnection 0.0.0.0 successfully!
_video_decoder_task, started!
[INFO] tello.py - 431 - Send command: 'motoron'
[INFO] tello.py - 455 - Response motoron: 'ok'
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "display-video.py", line 16, in run
    self.frame = copy.deepcopy(tello.get_latest_video_frame())
  File "/home/neon/.local/lib/python3.8/site-packages/djitellopy/enforce_types.py", line 54, in wrapper
    return func(*args, **kwargs)
  File "/home/neon/.local/lib/python3.8/site-packages/djitellopy/tello.py", line 414, in get_latest_video_frame
    frame = self._video_handler.read_video_frame(timeout=3, strategy="newest")
  File "/home/neon/.local/lib/python3.8/site-packages/djitellopy/video_handler.py", line 69, in read_video_frame
    return self._video_frame_queue.get(timeout=timeout)
  File "/usr/lib/python3.8/queue.py", line 178, in get
    raise Empty
_queue.Empty
```
@M4GNV5 @hildebrandt-carl I am unable to get video on desktop Ubuntu, but I can get it on the Jetson Nano with significant lag.
> This seems to be related and might be a problem: dji-sdk/RoboMaster-SDK#49 It appears to me the `libmedia_codec` library is not available as a pypi package?
@M4GNV5, libmedia_codec would need to be published on PyPI (by DJI) for your specific Python version; since it's not available there, you can use the code in the PR to compile it yourself.
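Since a binary wheel only installs when it matches your interpreter, a quick way to see which wheel your Python would need (should one ever be published) is to print your version and platform tags. This is a generic stdlib-only diagnostic, not something from the SDK:

```python
import sys
import sysconfig

# The CPython tag (e.g. "cp310") plus the platform string (e.g. "win-amd64"
# or "linux-x86_64") together determine which binary wheel pip can install.
cp_tag = "cp{}{}".format(sys.version_info.major, sys.version_info.minor)
platform = sysconfig.get_platform()

print(cp_tag)
print(platform)
```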
Hi,

I'm using a Tello TT drone and "djitellopy2" to perform flight control on it. However, I'm getting a very low frame rate when simply trying to display the real-time stream from the drone's camera. To display it, I'm using OpenCV and the example code for displaying stream frames:

```python
from djitellopy import Tello
import cv2

myDrone = Tello()
myDrone.connect()
myDrone.streamon()
frameObj = myDrone.get_frame_read()

if frameObj.grabbed:
    cv2.imshow("ARuco augmented", frameObj.frame)
    cv2.waitKey(1)
```

Is there a known issue about this or not?

Best regards,