autorope / donkeycar

Open source hardware and software platform to build a small-scale self-driving car.
http://www.donkeycar.com
MIT License
3.17k stars 1.3k forks

Lidar support #1028

Open duttasantanuGH opened 2 years ago

duttasantanuGH commented 2 years ago

Planning to add a lidar (https://docs.donkeycar.com/parts/lidar/) to a Jetson Nano-based JetRacer Pro (Waveshare: https://www.waveshare.com/jetracer-pro-ai-kit.htm).

Can anyone help us understand whether we need to make any connections other than the USB power mentioned in the documentation? Any other tips/advice will be highly appreciated.

Any suggestions on lidar models etc. will also be highly appreciated. Thanks in advance.

Ezward commented 2 years ago

We have Donkeycar 'parts' that can read data from Slamtec RPLidar units. The standard template supports this and will write the lidar data to the data folder if USE_LIDAR = True in your myconfig.py file.
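A minimal myconfig.py sketch for that path might look like the following. USE_LIDAR, LIDAR_LOWER_LIMIT, and LIDAR_UPPER_LIMIT appear in this thread's tracebacks/config; the limit values below are illustrative, not recommendations:

```python
# myconfig.py overrides (limit values are illustrative, not recommendations)
USE_LIDAR = True          # enables the lidar part in the standard template
LIDAR_LOWER_LIMIT = 90    # ignore measurements below this angle (degrees)
LIDAR_UPPER_LIMIT = 270   # ignore measurements above this angle (degrees)
```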

There is a newer driver, RPLidar2, that supports choosing where the 'zero' angle is and in which direction (clockwise or counter-clockwise) angles increase. That driver is not integrated with any template, so you would need to modify your manage.py to use it if you wanted that functionality. It also has a handy self-test that you can run without the rest of the donkeycar framework to test your lidar; it can be run on your laptop or your Raspberry Pi.
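The angle handling described above can be sketched roughly like this. This is only an illustration of the idea (shifting zero and flipping direction), not the actual RPLidar2 API:

```python
def remap_angle(raw_angle_deg, zero_offset_deg=0.0, clockwise=True):
    """Illustrative sketch: shift where the 'zero' angle is and choose
    the direction in which angles increase, normalized to [0, 360)."""
    a = raw_angle_deg - zero_offset_deg
    if not clockwise:
        a = -a  # flip direction of increasing angle
    return a % 360.0
```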

Ezward commented 2 years ago

Please let us know how this turns out by adding a comment to this issue. Thanks.

TCIII commented 2 years ago

@duttasantanuGH,

You do realize that there are presently no training models that use RPLidar data for navigation?

Therefore, do you plan to write your own training model and share it with the DC community?

Regards, TCIII

duttasantanuGH commented 2 years ago

@TCIII and @Ezward I was thinking that if I install donkeycar from the dev branch, I would get an installation that supports lidar (as per https://docs.donkeycar.com/parts/lidar/): reading data from the lidar, feeding the lidar data as input to training, and then using it during autopilot mode as well, based on the training data. Is that the case, or am I missing something? (Do I need to do as suggested here: https://www.hackster.io/bluetiger9/stereo-vision-and-lidar-powered-donkey-car-575769?) Will appreciate help here.

TCIII commented 2 years ago

@duttasantanuGH,

Ezward started integrating the lidar.py part into a DC training model over a year ago, and it is still not done.

It is presently on hold.

Regards, TCIII

Ezward commented 2 years ago

@duttasantanuGH There is support for lidar in that the data is saved; I do not think there is a model that uses it, however.

duttasantanuGH commented 2 years ago

I followed the steps as suggested by Ezward. On executing python donkeycar/parts/lidar.py (the intention is only to test whether the lidar is working, i.e. connecting and sending data), I am getting the following error: using donkey v4.3.16 ... future feature annotations is not defined (board.py, line 24). Will appreciate help in resolving the error.

TCIII commented 2 years ago

@duttasantanuGH, What branch of DC are you using? The main branch or next-gen-lidar-parts branch? There is no board.py part in either the main branch or the next-gen-lidar-parts branch.

I assume that you are following these instructions to install and use your Lidar with DC?

Could you please copy and paste the CLI output of manage.py drive where you get the error to help facilitate troubleshooting your issue.

TCIII

duttasantanuGH commented 2 years ago

Yes, I followed the instructions. I used the main branch; I saw that the dev branch is not there. Do you want me to try the next-gen-lidar-parts branch? (I faced other issues when I tried the next-gen-lidar-parts branch, but I can try again if needed.) Here is the output requested:

```
using donkey v4.3.16 ...
WARNING:donkeycar.parts.pins:pigpio was not imported.
INFO:donkeycar.config:loading config file: /home/jetson/mycar/config.py
INFO:donkeycar.config:loading personal config over-rides from myconfig.py
INFO:__main__:PID: 7804
cfg.CAMERA_TYPE CSIC
INFO:__main__:cfg.CAMERA_TYPE CSIC
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
  Camera index = 0
  Camera mode = 5
  Output Stream W = 1280 H = 720
  seconds to Run = 0
  Frame Rate = 120.000005
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
INFO:donkeycar.parts.camera:CSICamera opened...
INFO:donkeycar.parts.camera:...warming camera
INFO:donkeycar.parts.camera:CSICamera ready.
INFO:donkeycar.vehicle:Adding part CSICamera.
adding RP lidar part
Traceback (most recent call last):
  File "manage.py", line 853, in <module>
    meta=args['--meta'])
  File "manage.py", line 187, in drive
    lidar = RPLidar(lower_limit = cfg.LIDAR_LOWER_LIMIT, upper_limit = cfg.LIDAR_UPPER_LIMIT)
  File "/home/jetson/projects/donkeycar/donkeycar/parts/lidar.py", line 272, in __init__
    from adafruit_rplidar import RPLidar
  File "/home/jetson/env/lib/python3.6/site-packages/adafruit_rplidar.py", line 38, in <module>
    from busio import UART
  File "/home/jetson/env/lib/python3.6/site-packages/busio.py", line 16, in <module>
    import adafruit_platformdetect.constants.boards as ap_board
  File "/home/jetson/env/lib/python3.6/site-packages/adafruit_platformdetect/__init__.py", line 10, in <module>
    from .board import Board
  File "/home/jetson/env/lib/python3.6/site-packages/adafruit_platformdetect/board.py", line 24
    from __future__ import annotations
    ^
SyntaxError: future feature annotations is not defined
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
```

TCIII commented 2 years ago

@duttasantanuGH,

Unfortunately I never used the Lidar in the main branch and only worked with the next-gen-lidar-parts branch.

I have an A1M8 running successfully on a NVIDIA Nano 4GB running JP4.6 using the next-gen-lidar-parts branch.

Did you set up your Nano 4GB according to these instructions?

It looks like Adafruit has made changes to Adafruit_CircuitPython_RPLIDAR that make it incompatible with the Python 3.6.9 that ships with JP4.6.

Apparently annotations is only available from Python3.7 up and you are running 3.6.9.

Adafruit-blinka is now requiring Python 3.7. This library should also be requiring 3.7.
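The version gate involved can be sketched like this; `from __future__ import annotations` (PEP 563) only compiles on Python 3.7+, which is why 3.6.9 raises the SyntaxError above. The helper name here is ours, purely illustrative:

```python
import sys

# Postponed evaluation of annotations (PEP 563) first shipped in Python 3.7;
# importing it on 3.6 fails at compile time with a SyntaxError.
MIN_VERSION = (3, 7)

def supports_postponed_annotations(version_info=sys.version_info):
    """Return True if this interpreter can compile 'from __future__ import annotations'."""
    return tuple(version_info[:2]) >= MIN_VERSION
```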

What issues did you experience with the next-gen-lidar-parts branch?

TCIII

duttasantanuGH commented 2 years ago

@TCIII I used the next-gen-lidar-parts branch now. Fortunately there was no error this time installing from the next-gen-lidar-parts branch, but I am getting the same error for the lidar:

```
Traceback (most recent call last):
  File "manage.py", line 853, in <module>
    meta=args['--meta'])
  File "manage.py", line 187, in drive
    lidar = RPLidar(lower_limit = cfg.LIDAR_LOWER_LIMIT, upper_limit = cfg.LIDAR_UPPER_LIMIT)
  File "/home/jetson/projects/donkeycar/donkeycar/parts/lidar.py", line 272, in __init__
    from adafruit_rplidar import RPLidar
  File "/home/jetson/env/lib/python3.6/site-packages/adafruit_rplidar.py", line 38, in <module>
    from busio import UART
  File "/home/jetson/env/lib/python3.6/site-packages/busio.py", line 16, in <module>
    import adafruit_platformdetect.constants.boards as ap_board
  File "/home/jetson/env/lib/python3.6/site-packages/adafruit_platformdetect/__init__.py", line 10, in <module>
    from .board import Board
  File "/home/jetson/env/lib/python3.6/site-packages/adafruit_platformdetect/board.py", line 24
    from __future__ import annotations
    ^
SyntaxError: future feature annotations is not defined
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
```

Taking your cue, I installed version 1.2.1 (pip install adafruit-circuitpython-rplidar==1.2.1). The annotations error is gone, but I am facing a new error:

```
ERROR:donkeycar.parts.lidar:No RPLidar connected
Traceback (most recent call last):
  File "manage.py", line 853, in <module>
    meta=args['--meta'])
  File "manage.py", line 187, in drive
    lidar = RPLidar(lower_limit = cfg.LIDAR_LOWER_LIMIT, upper_limit = cfg.LIDAR_UPPER_LIMIT)
  File "/home/jetson/projects/donkeycar/donkeycar/parts/lidar.py", line 301, in __init__
    raise RuntimeError("No RPLidar connected")
RuntimeError: No RPLidar connected
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
```

Which version of adafruit-circuitpython-rplidar are you using? A few more details about my setup: I started with JetPack 4.5, and the RPLidar is an A1M8-R6. Without the lidar everything works fine.

Ezward commented 2 years ago

@duttasantanuGH Sounds like you may be using Python 3.6? I would update to Python 3.8. We have started adding type annotations to the code, and they require a newer version of Python.

TCIII commented 2 years ago

@duttasantanuGH,

Do you have glob2 installed (pip3 install glob2)?

TCIII

duttasantanuGH commented 2 years ago

@TCIII - Yes, I have installed glob2. @Ezward - That would help. Kindly let me know once you update.

TCIII commented 2 years ago

@duttasantanuGH,

Do you have user write permissions set for the NVIDIA USB ports?
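One common way to grant that permission persistently is a udev rule for the lidar's USB-UART adapter. This is a sketch: the file name is hypothetical, and the vendor/product IDs below are for the Silicon Labs CP210x bridge that many RPLidar adapters use; check yours with lsusb before copying them.

```
# /etc/udev/rules.d/99-rplidar.rules  (hypothetical file name)
# Match the CP210x USB-UART commonly found on RPLidar adapters and make
# the resulting /dev/ttyUSB* node world read/writable.
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", MODE="0666"

# Then reload the rules and unplug/replug the lidar:
#   sudo udevadm control --reload-rules && sudo udevadm trigger
```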

TCIII

duttasantanuGH commented 2 years ago

@TCIII - Added it now, and python donkeycar/parts/lidar.py is working. Thanks for the kind support. But it crashes after a few seconds with the following message: using donkey v4.3.14 ... Gtk-Message: 00:23:14.094: Failed to load module "canberra-gtk-module"

TCIII commented 2 years ago

@duttasantanuGH, It appears that the "canberra-gtk-module" error has to do with Firefox, which I believe the JP4.6 Ubuntu 18.04 uses?

Are you running manage.py drive over SSH, or are you trying to use the lidar.py built-in display (if __name__ == "__main__":), which requires that you run from the Ubuntu desktop to work?

You could try sudo apt-get install libcanberra-gtk-module, but I don't know why you are getting this error, as it is Firefox-related.

TCIII

duttasantanuGH commented 2 years ago

@TCIII and @Ezward - Thanks. I am using a headless setup and doing ssh -Y to get the display. Two issues I am facing now:

Issue 1: I had installed libcanberra-gtk-module and that error is resolved, but when I run python donkeycar/parts/lidar.py, the "lidar on jetson-desktop" window appears for a brief spell and then crashes within a few seconds.

Issue 2: When I run manage.py drive, the car initializes, but when I try to drive with the joystick it crashes with the following warning message and, more importantly, the following error message:

Warning message:

```
/home/jetson/projects/donkeycar/donkeycar/parts/lidar.py:323: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
  if angles_ind != []:
```

Error message:

```
Traceback (most recent call last):
  File "/home/jetson/projects/donkeycar/donkeycar/vehicle.py", line 154, in start
    self.update_parts()
  File "/home/jetson/projects/donkeycar/donkeycar/vehicle.py", line 202, in update_parts
    outputs = p.run(*inputs)
  File "/home/jetson/projects/donkeycar/donkeycar/parts/keras.py", line 109, in run
    np_other_array = np.array(other_arr) if other_arr else None
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
INFO:donkeycar.vehicle:Shutting down vehicle and its parts...
INFO:donkeycar.parts.camera:Stopping CSICamera
INFO:donkeycar.parts.controller:button: a_button state: 0
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jetson/projects/donkeycar/donkeycar/parts/lidar.py", line 307, in update
    for scan in scans:
  File "/home/jetson/env/lib/python3.6/site-packages/adafruit_rplidar.py", line 508, in iter_scans
    for new_scan, quality, angle, distance in iterator:
  File "/home/jetson/env/lib/python3.6/site-packages/adafruit_rplidar.py", line 433, in iter_measurements
    raw = self._read_response(dsize)
  File "/home/jetson/env/lib/python3.6/site-packages/adafruit_rplidar.py", line 258, in _read_response
    data = self._serial_port.read(dsize)
  File "/home/jetson/env/lib/python3.6/site-packages/serial/serialposix.py", line 575, in read
    buf = os.read(self.fd, size - len(read))
TypeError: an integer is required (got type NoneType)
```
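The ValueError above comes from evaluating a multi-element numpy array in a boolean context: `np.array(other_arr) if other_arr else None` implicitly calls `bool(other_arr)`, which numpy refuses once the array has more than one element. A sketch of an explicit check that avoids this (the helper is illustrative, not donkeycar's actual fix):

```python
import numpy as np

def to_array_or_none(other_arr):
    """Hypothetical illustration: test for None/emptiness explicitly
    instead of relying on array truthiness, which raises ValueError
    for arrays with more than one element."""
    if other_arr is None or len(other_arr) == 0:
        return None
    return np.array(other_arr)
```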

TCIII commented 2 years ago

@duttasantanuGH,

I suggest that you run the lidar.py main from the desktop, see if that is functional, and then move on to trying to run headless.

TCIII

duttasantanuGH commented 2 years ago

@TCIII Thanks for your suggestion. It was a good idea to try it out on a host Ubuntu machine. I could figure out the silly error (read: laziness) on my side: not looking at the default arguments ("number", and line 881 of lidar.py: while scan_count < args.number). This was making the lidar run last only a few seconds. Now that that part is solved, I can connect the lidar to the Jetson Nano and get the lidar data as part of data collection. I understand from the earlier comments that there is no training model available at present to consume lidar data and subsequently help in navigation.

duttasantanuGH commented 2 years ago

@Ezward & @TCIII Please note the following data that I am getting in the "catalog_0.catalog" file.

For an image with a clear pathway for the car:

```json
{"_index": 4, "_session_id": "22-07-25_0", "_timestamp_ms": 1658771315188, "cam/image_array": "4_cam_image_array_.jpg", "lidar/dist_array": [3, 2, 1, 0], "user/angle": 0.0, "user/mode": "user", "user/throttle": 0.033509323404644915}
```

For an image with an obstruction in front of the car:

```json
{"_index": 4, "_session_id": "22-07-25_0", "_timestamp_ms": 1658771370749, "cam/image_array": "4_cam_image_array_.jpg", "lidar/dist_array": [29, 30, 15, 16, 17, 14, 18, 13, 12, 19, 20, 21, 11, 10, 22, 9, 23, 8, 24, 7, 25, 6, 26, 28, 5, 27, 4, 3, 2, 1, 0, 31, 33, 32], "user/angle": 0.0, "user/mode": "user", "user/throttle": 0.25}
```

Few questions - if you can kindly help me with responses:

  1. Can you please throw some light on why the lidar/dist_array values are of varying length in the two cases?
  2. Any thoughts on how to interpret this data in terms of distance?
  3. Any further thoughts on how to feed this lidar data into a model, given that the arrays are of varying length?

I will very much appreciate your response on this.

Ezward commented 2 years ago

@duttasantanuGH sorry for the long delay. We respond very quickly in the Discord community, but I do like the detail that is captured here in the issue. So perhaps just ping me in the community when you add something to the issue. I am Ezward in the Discord.

Now, in terms of the values: the code says they are distances. I think there is supposed to be one per degree of rotation, starting at the minimum angle you have set in configuration, up to the max. I don't know why the arrays can be of different lengths. Perhaps @zlite knows; he knows the RPLidar driver code better than I do.
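If there is indeed meant to be one distance per degree, a variable-length scan can be normalized into a fixed-length array before it is fed to a model. A rough sketch of that idea (function name and defaults are ours, purely illustrative):

```python
def scan_to_degree_bins(scan, num_bins=360, default=0.0):
    """Bin (angle_deg, distance) measurements into a fixed-length list,
    one slot per degree, so variable-length scans get a uniform shape
    suitable as model input. Illustrative sketch only."""
    bins = [default] * num_bins
    for angle, distance in scan:
        i = int(angle) % num_bins
        # keep the nearest return if a degree receives multiple measurements
        if bins[i] == default or distance < bins[i]:
            bins[i] = distance
    return bins
```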