NVIDIA-AI-IOT / jetbot

An educational AI robot based on NVIDIA Jetson Nano.
MIT License

How to combine Road Following and Collision avoidance? #267

Closed sakae-og closed 2 years ago

sakae-og commented 4 years ago

Hi All. I recently got a JetBot and am running the demos. Road Following and Collision Avoidance each work on their own.

I want to combine them so that the robot stops when there is an object in its way during Road Following. Simply merging the two notebooks doesn't work, so I would like to know how you would do it.

I'm not good at English. I'm sorry if it's hard to understand.

tomMEM commented 4 years ago

Hello, you would need road following code that is actually based on single-object recognition - lanes, signs, or other objects.

You can have a look at the JetRacer in NVIDIA-AI-IOT/jetracer; it allows training with several categories.

I added the JetBot controls to the JetRacer script "interactive_regression_datacollection"; it lets you drive the bot with a joystick and collect data via the clickable image widget. The script is in my Jetbot-Project repository. It requires installation of JetCam and the Clickable Image Widget. I also added to the scripts a live run using the road following category, with a target display, based on the JetBot road following scripts; it seems to have less time lag. The scripts can be used for both kinds of models, with and without a category.

With the JetBot scripts, which are based on image classification and inference, you could try to train collision avoidance with a single object, like a water bottle (captured from many different directions as 'blocked'), with all other images (street, strips, etc.) as 'free'.

Using the TRT versions of the collision and road following models it might work to some degree, but the bot might not move fast.

If you put your script in your forked repository, then it would be possible for me to work on it a bit too.

Best T

sakae-og commented 3 years ago

Hi @tomMEM

Thank you for your answer. I looked at the script and tried various things, but I got an error along the way.

Is it possible to control the bot without a joystick? Also, I can't install the Clickable Image Widget - what should I do?

I'm still new to machine learning, the Jetson Nano, and Git (GitHub), so I don't know how to deal with these problems. How can I achieve my goal?

tomMEM commented 3 years ago

Hello sakae-og, I could add keyboard button control instead. But for now, just do not activate the controller cell, or simply ignore the error message; the controller is not required for data collection, it is just a convenience to avoid squatting down to reposition the bot.

The Clickable Image Widget, however, you do need. It can be installed following the description by user Lunran in https://github.com/jaybdub/jupyter_clickable_image_widget/issues/3, see below.

It worked on the jetbot_image_v0p4p0 SD image without updating Jupyter etc. In case you have already tried, just delete the local jupyter_clickable_image_widget folder and start again.

However, my second installation attempt got stuck at sudo pip3 install -e ., so I used python3 setup.py build (it takes a long time), followed by sudo pip3 install -e . (the dot is important).

```
cd                                        # go to the home directory
sudo apt-get install nodejs-dev node-gyp libssl1.0-dev
sudo apt-get install npm                  # likely already installed, but it does no harm
git clone https://github.com/jaybdub/jupyter_clickable_image_widget
cd jupyter_clickable_image_widget
git checkout no_typescript
sudo python3 setup.py build               # TB modified, takes a long time (>30 min)
sudo pip3 install -e .                    # the dot is important
sudo jupyter labextension install js
sudo jupyter lab build
```

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM ,

I am looking to accomplish the same task as @sakae-og. Looking at your "CategoryRoad_Jetracer_2_Jetbot" on GitHub, I see that in the interactive_regression notebook you used 2 categories (Apex and Bottle). I am not sure what exactly you want to accomplish, but I would like my JetBot to stop when it sees the "bottle" in the road and proceed when the bottle is removed - is that what your script is meant to do? I apologize for my limited understanding of the script, as I am just a beginner. Also, I have tried using just Apex and got good results, but I have never really figured out how to use several categories.

P.S. I would also like to try the second point you mentioned: using the TRT versions of both the collision and road following models on the JetBot, with a single object as 'blocked'.

tomMEM commented 3 years ago

Hello Abuel, thank you for your interest. I added a behaviour (a short backward drive) for the second category (bottle), just for demonstration, in a new file there (live_demo_trt_jetracer_categoryModel_for_jetbot_with_stop_and_timeseries_Display). It does not depend on the chosen category. One problem is keeping the camera stream updating while adding new actions, and reducing false positive detections. Normally the scores (0-1) could also be used, here for example the x values; for now, however, just prediction.value is used. Since it is a probability, the second category needs to be well trained to avoid too many false positive stops. The jetson-2-jetbot "live" training display helps to achieve that. The original JetRacer script is based on the jetson-dlinano tutorials, which show the use of different categories in greater detail.

2) Object location and collision avoidance work together; road following, however, requires some speed, but I could try to add it.
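For illustration, the demonstration behaviour amounts to something like this (a rough sketch, not the exact notebook code; the class index and timings are placeholders):

```python
import time
from jetbot import Robot

robot = Robot()
BOTTLE = 1   # hypothetical class index of the second category

def react(prediction):
    # demonstration behaviour: brief backward drive when the bottle category wins
    if prediction == BOTTLE:
        robot.backward(0.3)
        time.sleep(0.4)
        robot.stop()
```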

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM,

Thanks a lot for the scripts, I will definitely try them out. For now, using the original NVIDIA JetBot code, I have trained my collision avoidance model on my road following street, with lanes, strips, etc. set as 'free' and a bottle (captured from various directions) set as 'blocked'. I would like to combine the "road following live demo" and "collision avoidance live demo" scripts and see if the robot can perform both at the same time (even if a lower speed is required). I know that the system architecture for a system using multiple models will generally need to be different - has anybody already combined the two before?

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM,

I have just completed the initial setup (installed JetCam and the Clickable Image Widget) and was able to run the script "interactive_regression_category_datacollection" successfully with no errors. Thanks!

While collecting training data, however, I am still a little confused. For the category 'apex' it is clear that we label the point on the path that we would like the JetBot to follow; for the second category 'bottle', however, what exactly should I set as the target?

P.S. My end goal is a road following model with which the JetBot follows a certain path and comes to a halt whenever the second category, "bottle", is placed in front of it.

tomMEM commented 3 years ago

Hello Abuel, I combined the original JetBot collision avoidance and road following TRT scripts into one script: trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb. It actually works with sufficient speed and a time lag similar to road following alone.

It requires two well-trained models. The object might need to be trained not only on the road but also against different backgrounds, etc.

The probability threshold (0-1) can be adjusted with one of the sliders; start with 0.8 or 0.9 (a higher threshold gives fewer false positives but more false negatives). The bot stops for a couple of frames (slider "Manu. time stop"); other behaviors could be added in the last main cell.
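In essence, the combined script's main loop does something like the following (a rough sketch, not the exact notebook code; the model file names are the ones used in this thread, preprocess is the usual JetBot normalization helper, and the gain/speed values are placeholders for the sliders):

```python
import torch
import torch.nn.functional as F
import numpy as np
import PIL.Image
import torchvision.transforms as transforms
from torch2trt import TRTModule
from jetbot import Robot, Camera

# the two TRT models built with the original JetBot notebooks
model_trt = TRTModule()
model_trt.load_state_dict(torch.load('best_steering_model_xy_trt.pth'))   # road following
model_trt_collision = TRTModule()
model_trt_collision.load_state_dict(torch.load('best_model_trt.pth'))     # collision avoidance

device = torch.device('cuda')
mean = torch.Tensor([0.485, 0.456, 0.406]).to(device).half()
std = torch.Tensor([0.229, 0.224, 0.225]).to(device).half()

def preprocess(image):
    image = PIL.Image.fromarray(image)
    image = transforms.functional.to_tensor(image).to(device).half()
    image.sub_(mean[:, None, None]).div_(std[:, None, None])
    return image[None, ...]

robot = Robot()
camera = Camera.instance(width=224, height=224)

BLOCK_THRESHOLD = 0.8               # start with 0.8 or 0.9
SPEED, STEERING_GAIN = 0.15, 0.04   # placeholders, normally set by sliders

def execute(change):
    image = preprocess(change['new'])
    # 1) collision model: probability that the view is "blocked"
    prob_blocked = float(F.softmax(model_trt_collision(image), dim=1).flatten()[0])
    if prob_blocked > BLOCK_THRESHOLD:
        robot.stop()                # stop instead of steering for this frame
        return
    # 2) road following model: predicted x, y of the road target
    xy = model_trt(image).detach().float().cpu().numpy().flatten()
    angle = np.arctan2(xy[0], (0.5 - xy[1]) / 2.0)
    steering = angle * STEERING_GAIN
    robot.left_motor.value = max(min(SPEED + steering, 1.0), 0.0)
    robot.right_motor.value = max(min(SPEED - steering, 1.0), 0.0)

camera.observe(execute, names='value')
```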

The category version actually works in a similar way.

Hope it runs. Best T

tomMEM commented 3 years ago

Hello Abuel, thank you for testing. In my brief testing, the JetBot script using the two models seemed to work as hoped - road following and stop. I just uploaded it.

In the category script, extraction of the classification probability is missing, or at least is not very robust. Since we have four prediction values, some summing or averaging would be needed to use the probability values for a prediction threshold.

The object learning is most likely similar to your trials with JetBot collision learning: the isolated object on different backgrounds, but also on the street. Hope it works out. Best, T

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM,

I just tested your script "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb" with my JetBot and it works just fine. Very much appreciated. I hope to do the same experiment with my fast JetRacer.

tomMEM commented 3 years ago

Hello Abuel, good that it works a bit. I uploaded a modified version of the jetson-2-jetbot road following with category (trt_jetracer_categoryModel_for_jetbot_with_stop). The controls are the same. I am not sure whether I used the correct probabilities or whether the trained model is simply not sufficient; at least the network cannot produce very high predictions for the object. Maybe it works in your hands. All the best, T

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM,

Nice, I would love to try it out; however, I'm still confused about how to train multiple categories (Apex: regression and Bottle: classification) using "Interactive_regression_category_datacollection".

sakae-og commented 3 years ago

Hi @tomMEM

I'm sorry for my late response.

As Abuel mentioned, I tried a lot. First, I ran "interactive_regression_category_datacollection_jetracerforjetbot_joystick". Then, in the TASK part, I got "No module named 'xy_dataset'". When I try to install xy_dataset with pip, it says "No matching~".

Why doesn't it work??

I'm sorry for asking such basic things.

abuelgasimsaadeldin commented 3 years ago

Hi @sakae-og,

xy_dataset is a Python script and needs to be uploaded into the working directory in order to run the "Task" part. It can be found in the JetRacer notebooks directory (https://github.com/NVIDIA-AI-IOT/jetracer/tree/master/notebooks) together with "utils.py".

tomMEM commented 3 years ago

Hello Abuel, data collection for the road is similar to JetBot, with the difference that you need to take more care where to place the spot; collect a good number of images in database A. The model can be trained and tested live, and more images can be added; the model could already be used to run the bot. For the second category, a new category needs to be chosen and images again collected in A by pointing to a similar spot on the object - again a good number of images with slight variations (the object should remain whole in the field of view). It is also best if the object covers a large part of the field of view for the "road stop" approach. However, the main problem is the probability score in the live run. The probability is normalized over all categories at the same time (sum = 1), so we cannot apply a simple threshold (although at the moment that is what is done). The model always detects "road" and "bottle" to a certain degree by chance and returns a probability. Only if the "road" is covered by the "object" might a decent probability for the object (0.7) be obtained so far. I am still looking for ways to extract probabilities for a minimum number of adjacent pixels.
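As a toy illustration of why a simple threshold misbehaves when the scores are normalized over all categories (the numbers here are made up):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([1.2, 0.9, 1.1, 0.4])   # made-up raw outputs for four values
probs = F.softmax(logits, dim=0)              # normalized over ALL values at once
print(probs.sum())   # tensor(1.)
print(probs)         # ~[0.32, 0.24, 0.29, 0.15] -- no single value is near 1,
                     # even when one category clearly "wins"
```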

tomMEM commented 3 years ago

Hello sakae-og, thank you for the feedback. If the goal is only a road stop (collision avoidance) while running road following, then please just use the last script from my jetbot-project (trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb), which uses two models: a) the one from JetBot collision avoidance, converted to TRT - best_model_trt.pth; b) the one from JetBot road following, converted to TRT - best_steering_model_xy_trt.pth. Both models need to be placed in e.g. the RoadFollowing directory of the JetBot repository, where "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb" also has to be placed.

Check the names of your models in one of the first cells and give it a run.

Sorry for the confusion with the JetRacer scripts, which require an additional clone and installation of the JetRacer repository, as well as installation of JetCam, the Clickable Image Widget, etc.

Hope it works out. T

sakae-og commented 3 years ago

Hi Abuel. Thank you for teaching me. I was able to do it without any problems!

But next I got:

```
----> 9 left_link = traitlets.dlink((controller.axes[1], 'value'), (free_left, 'value'), transform=lambda x: -x)
IndexError: tuple index out of range
```

Hi @tomMEM, thank you for your reply. I also tried running trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb. Where do you create the 'best_steering_model_xy_trt.pth' and 'best_model_trt.pth' needed to run it?

I thought I would create them with the data_collection notebooks, but is that different?

tomMEM commented 3 years ago

Hello sakae-og, the models 'best_steering_model_xy_trt.pth' and 'best_model_trt.pth' have to be created with the JetBot scripts:
a) jetbot/collision_avoidance: run data collection, train with train_model_resnet18.ipynb, and build the TRT model with live_demo_resnet18_build_trt.ipynb from best_model_resnet18.pth (before it was just best_model_trt.pth; the live demo has the wrong model name).
b) jetbot/road_following: data_collection.ipynb, then train_model.ipynb, then live_demo_build_trt.ipynb.
Then place the collision_avoidance model in jetbot/road_following, together with "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb".

The jetracer-2-jetbot project is based on one database created by interactive_regression.ipynb of the JetRacer road following notebooks (or the version I modified a bit for use with a joystick - you can disable that by placing # in front of the line). Thus, you need to clone jetracer and place the scripts there. If you have no joystick, just use the JetRacer "interactive_regression.ipynb".

Hope it works, best T

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM,

I have just tried running the live demo script "trt_jetracer_category_Model_for_jetbot_with_stop.ipynb" and noticed similar problems with the prediction: when the robot is completely free (no "bottle" on the track) the prediction shows a value of around 0.49, and when the "bottle" is placed on the road the prediction only goes up to around 0.6 max. So I tried setting the threshold to about 0.5 and, as expected, there were a lot of false positives.

Also, for the "category" part in the live demo, I did not fully understand it, but it shows a value of '3' when the robot is fully free and '1' or '2' when the robot is being blocked.

tomMEM commented 3 years ago

Hello Abuel, good that you got it running. Yes, those are the current problems. The numbers 0 to 3 are supposed to represent the two categories, each having an x and a y. So we have four classes with the numbers 0, 1, 2, 3, but I am not sure how they are sorted in relation to the two categories.

1) For the probability scores I used the order 0, 1 for the first category and 2, 3 for the second. You could try 1, 2: just replace indices = [2, 3] with indices = [1, 2].

2) You could also experiment with the script: at the last cell, replace "if prob_blocked > block_threshold.value" with "if 1 <= prob_blocked <= 2", and add "prob_blocked = prediction_widget.value" after "prediction_widget.value = category_number". Then the script is not using scores but the predicted class (two categories, each with x and y).
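Put together, the suggested change at the last cell would look roughly like this (a sketch, untested; the widget and variable names are the ones from the script):

```python
indices = [1, 2]                          # instead of [2, 3], to test the other ordering

# ...inside the execution loop:
prediction_widget.value = category_number
prob_blocked = prediction_widget.value    # use the predicted class instead of a score

if 1 <= prob_blocked <= 2:                # instead of: prob_blocked > block_threshold.value
    robot.stop()
```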

Hope it helps a bit. I still need to find out how to get probability scores for just one class out of the model without normalization over all classes. Best, T

tomMEM commented 3 years ago

Hello, it looks like there is no way to get a likelihood measure of the inference strength per category from the current TRT model. The four scores in the current approach are not useful. So it seems that an approach with two models (one for classification/object recognition, one for regression) with one or two video streams is required. With a Jetson Xavier it would likely be possible at sufficient speed. Still, we could add the collision_avoidance model to the JetRacer script if you think that is useful. Best

T

tomMEM commented 3 years ago

Hello Abuel, as an exercise I added the JetBot collision_avoidance model to the JetRacer script and removed the dependencies on categories - uploaded as "trt_jetracer_categoryModel_for_jetbot_with_collision_avoidance_of_jetbot.ipynb".

Categories could still be used, e.g. to switch from road following mode to object following mode.

Please note I tested it with the old JetBot collision_avoidance model "best_model_trt.pth", not with the newer "best_model_resnet18.pth".

Hope it works. T

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM,

Thank you, will definitely try out your suggestions as well as the new script with the collision avoidance model this coming Monday.

Actually, what I would really like to accomplish, since combining the 2 models for road following and collision avoidance worked so well, is to add more categories to the collision model. For example: if a "traffic light" is detected, stop for 5 seconds; if "bottle" is detected, stop until the "bottle" is removed. May I ask how you would accomplish such a task?

Secondly, for object following I also had a look at your script "live_demo-object following_tweak.ipynb", and after playing around with the motor adjustments etc. the object following worked amazingly well. I appreciate your work. However, I would expect that once the robot reaches the "target" object it would stop, whereas it just ends up running into the object. So I would like to ask how you would estimate the distance to a "target" object without external sensors (by using bounding boxes, for example) so that the robot can stop after reaching the object.

Thank you Best, Abuel

tomMEM commented 3 years ago

Hello Abuel, thank you for your feedback and testing. 2) I added a stop for when the object has been reached (live_demo-objectfollowing_tweak_object_stop.ipynb). The threshold for the stop is in one of the sliders, "object_stop_threshold". It uses the "foot" of the object and depends on the angle of your camera (1 = image bottom, 0 = sky). The stop time can be adjusted with another slider to the right, and the collision_avoidance threshold beside it (start with 0.9). TRT_object_following_tweak_object_stop.ipynb is based on the TRT best_model_trt.pth; the time lag is over a second, at 10-16 FPS. Using triangulation instead would require knowledge of the object size and camera calibration (angle, etc.).
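The "foot" test is essentially the following (a sketch; it assumes detections in the format of the JetBot object following demo, with bbox coordinates normalized to 0..1 and y = 1 at the image bottom):

```python
def should_stop(detection, object_stop_threshold):
    """Stop when the lower edge ("foot") of the detected object is close enough."""
    left, top, right, bottom = detection['bbox']   # normalized 0..1
    # the foot of the box moves toward 1.0 (image bottom) as the object gets closer
    return bottom > object_stop_threshold
```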

1a) " road following and collision avoidance worked" did you mean that the category script worked a bit? Actually, you could have a high number of categories, if you do not need to predict the x y coordinates to turn towards the category/object. Just train one model for categories without x and y that can be used for inference (probability of detection around 60%), and the other one for the road following with x and y (you have already).

1b) If the environment is fixed, then classical OpenCV object recognition (thresholding, edges, etc.) could be used. If using models, there might be a speed problem; however, there are specialized datasets and pre-trained models for traffic, e.g. the German Traffic Sign Recognition Benchmark (GTSRB) dataset or models like TrafficCamNet. A simplified dataset could be trained with ResNet18 - might be an interesting task.

Best, T

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM,

Sorry for the late reply, and thank you for the "live_demo-objectfollowing_tweak_object_stop.ipynb" script. I haven't tried it out yet but definitely will! Thanks.

As for the "road following and collision avoidance worked" I meant using the "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb" as well as the "trt_jetracer_categoryModel_for_jetbot_with_collision_avoidance_of_jetbot.ipynb" (I have just recently tried it out and it also worked amazingly well however I felt that using the Jetbot RoadFollowing the results are slightly better, but it could just be my training dataset.

For the "Traffic light" and "bottle bottle" experiment, I would love to just be able to add several categories to the "Interactive Regression script", one category being for apex (regression) and the other two for classification (without x&y), however in the interactive regression, I see that there is only regression (x&y) option when training. So the only way I see it possible to accomplish this task is to train a model for the custom "Traffic light" and possibly add it to the script with the road following and bottle already (trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb). Or is there another way of accomplishing this task by just modifying ready Jetbot or Jetracer scripts?

tomMEM commented 3 years ago

Hello Abuel, thanks for your feedback and interest. It looks like the category model is a few frames slower, perhaps because of the two categories or something in the loop. Using the JetBot road following model in the jetracer-2-jetbot script seems to work better.

Traffic light: we could add categories to the collision_avoidance model - it only requires a few changes - and either add it directly to the "category bottle" or use it as a third model. It looks like it is better to work with "specialized" models per task. The third model could then use every other frame for its check. I will try in the coming days.

However, it will detect and respond to objects early - so I guess you can only find out by trying whether it remains useful for your task.

If the SSD MobileNet detects your traffic light as an object, that would be the more "future-proof" solution; however, because of the time lag (about 1-2 s) it prevents good road following performance. All the best, T

tomMEM commented 3 years ago

Hello Abuel, I added a new folder "Classification_Stop" with 1_datacollection, 2_TRT and 3_live run.

It uses four classes (more could be added). The background class is most important: put everything into it that should not give a stop signal (it will actually still produce a signal, but we are not going to use it). If the backgrounds are very different, a second background class could be added. The network cannot say "I do not know", so it will always choose a class; to avoid spurious signals in the essential classes, non-essential objects need to be in the background class. The other classes need to be carefully trained with the corresponding object (against different backgrounds) and at the distance and image position at which you would like it detected. Please check the readme. So far no behavior change according to class has been implemented (but some sliders control time, turn angle, and speed).

The road following got a bit of a time lag, so I hope it will still run sufficiently. If it works, then some behavior could be added. Best, T

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM,

That is definitely something to try out. However, I do have a few questions. Firstly, I see that there are 4 categories ('background1', 'redlight', 'greenlight' and 'bottle') but there is no 'Apex' (road following). Are you creating 2 separate models, one for these object detections and behaviors and another for road following, and then combining the 2?

As for the experiment I originally wanted to accomplish, I would say it is a lot less complex than what you are attempting here with Classification_Stop. I wanted to start with something simpler using just 3 categories ('Apex', 'Bottle', 'Trafficlight'), where the traffic light is more like a Lego piece for my Lego track: if 'Trafficlight' is detected, the JetBot just pauses for 5 seconds and continues with road following; if 'Bottle' is detected, the JetBot stops until the 'Bottle' is removed.

Nevertheless, I really look forward to trying and testing out your 'Classification_Stop' and see what the results are. Thanks!

tomMEM commented 3 years ago

Hello Abuel, thanks for checking it out. Now we have too many versions of the scripts, so it might be a bit tricky. But to distinguish different objects/situations we need a classification model, and the collision_avoidance model is one. It is now extended to four categories (names and number can be changed, but they need to be consistent across scripts 1, 2 and 3). If you would like only three, please remove one from the list or leave it empty. The background is "free" and bottle is bottle. Once the 1_classification... script has created the folders, you can copy the images from your former trainings into them. In addition, you would need to train for the Lego. Once you think the classification recognizes your objects, the other scripts can be used.

Road following uses the JetBot road following script - you could replace it with a two-category JetRacer model [apex, bottle], but the bottle category would not be used. I could not find out how to get classification confidence scores out of the JetRacer regression model, so I used two specialist models. I think it might just work for your task. Best, T

abuelgasimsaadeldin commented 3 years ago

Hi @tomMEM, thank you for the very good explanation. I would love to try the experiment out as soon as I can and will surely keep you updated. Thanks again!

tomMEM commented 3 years ago

Hello @abuelgasimsaadeldin and @sakae-og, I removed sliders etc. to increase the road following speed in trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb and 3_roadfollowing_classification_behavior.ipynb. The latest scripts also do not need JetCam or the Clickable Image Widget. There is still a build-up of memory over hours, so you need to "kill" your explorer in time and restart. The last script allows three categories whose threshold and pause time can be adjusted (manually, in a list at the last major cell). The time lag is now like that of road following alone. In my brief tests it worked, but of course it needs training and adjustment for specific environments. Best, T
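That per-category list might look something like this (a sketch with made-up names and values; the real list is in the last major cell of 3_roadfollowing_classification_behavior.ipynb):

```python
CATEGORIES_classification = ['free', 'blocked', 'box', 'cup']

# per category: (probability threshold, pause in seconds); None = stop until removed
BEHAVIOR = {
    'blocked': (0.80, None),
    'box':     (0.85, 5.0),
    'cup':     (0.85, 2.0),
}
```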

sakae-og commented 3 years ago

Hi @tomMEM Thank you as always.

I ran "trt_jetracer_categoryModel_for_jetbot_with_collision_avoidance_of_jetbot.ipynb" using the file created by jetbot.

Then I got a KeyError:

```
KeyError                                  Traceback (most recent call last)
<ipython-input> in <module>
      4
      5 model_trt = TRTModule()
----> 6 model_trt.load_state_dict(torch.load('best_steering_model_xy.pth'))  # well trained road following model
      7
      8 model_trt_collision = TRTModule()

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
    822                 load(child, prefix + name + '.')
    823
--> 824         load(self)
    825         load = None  # break load->load reference cycle
    826

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load(module, prefix)
    817             local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
    818             module._load_from_state_dict(
--> 819                 state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
    820         for name, child in module._modules.items():
    821             if child is not None:

/usr/local/lib/python3.6/dist-packages/torch2trt/torch2trt.py in _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
    303
    304     def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs):
--> 305         engine_bytes = state_dict[prefix + 'engine']
    306
    307         with trt.Logger() as logger, trt.Runtime(logger) as runtime:

KeyError: 'engine'
```

What is the cause of this?

tomMEM commented 3 years ago

Hello @sakae-og, a KeyError can happen when the model file is corrupted, either during copying (you have to wait long enough for the copy to finish) or during saving while or after training. Please just try a fresh model (copied again or newly trained); note that the model in the TRT build notebook needs to be saved manually. Your current model might also not work in the original JetBot road following script if you copied it back to JetBot. Hope it works out. Best, T
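A quick way to check what kind of checkpoint you actually have (the 'engine' key is what TRTModule looks for, as the traceback above shows):

```python
import torch

sd = torch.load('best_steering_model_xy_trt.pth')
# a torch2trt-built checkpoint contains an '...engine' entry;
# a plain (unconverted) ResNet state_dict does not
print([k for k in sd.keys() if 'engine' in k])
```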

sakae-og commented 3 years ago

Hello @tomMEM, thank you for your reply. I copied the file again, but I got a KeyError again.

Where do you create the "best_steering_model_xy_trt.pth" and "best_model_trt.pth" in the first place?

I collect images with "data_collection" in "collision_avoidance" and "road_following" of NVIDIA-AI-IOT/jetbot and train with "train_model". Then I rename the resulting file.

Is the method wrong in the first place?

tomMEM commented 3 years ago

Hello @sakae-og, I wrote a bit more in the readme at point 4. But it looks to me like you are skipping the conversion step of the models to TRT. After training, you need to build the TRT models.
In case your clones of NVIDIA-AI-IOT/jetbot or jetracer are too old and do not contain the "build TRT" scripts, you need to update them using cd jetbot, then git pull origin master, followed by sudo python3 setup.py install. Do the same for jetracer (cd .., cd jetracer, etc.) and also install torch2trt as described in "live_demo_resnet18_build_trt.ipynb" from the original JetBot.

So for the JetBot collision_avoidance model the steps are:
a) data_collection,
b) train with train_model_resnet18.ipynb,
c) build the TRT model with live_demo_resnet18_build_trt.ipynb from best_model_resnet18.pth (all scripts are in the original jetbot collision_avoidance folder),
d) finally copy the TRT model from c) into one of our "road following with collision avoidance" folders and put the right name into the script.
For the steering you can use JetBot road following:
a) data_collection.ipynb,
b) train_model.ipynb,
c) live_demo_build_trt.ipynb to convert to TRT,
d) copy the model into our road following folder and put its name into the script.
I hope I did not forget a step. The same sequence is necessary for "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb" - or just reuse the models once the TRT builds exist. Wishing you success. Best, T

sakae-og commented 3 years ago

Hi @tomMEM, thank you! I was able to start it without problems!

However, "live_demo" of "road_following" works, but jetbot spins around with this program.

But I'm glad the program worked! Thank you very much !!!!

I will play with it a little more and keep trying.

tomMEM commented 3 years ago

Hello @sakae-og, thank you for your feedback. 1) If you have JetBot TRT models for collision_avoidance and road following, then please use the latest (10/08) "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb". It stops when e.g. a cup is in the way (if trained), has fewer settings to care about, and is fast. Just copy the mentioned script into the folder with the two models.

2) If you would like to test the JetRacer category model for road following with collision avoidance, then please try "trt_jetracer_categoryModel_for_jetbot_with_collision_avoidance_of_jetbot.ipynb". The categories are not of much use so far, but the collision avoidance works. It has more settings for the stop/pause behavior: start by setting everything under threshold/block to zero (angle, speed, via the sliders), otherwise it will spin around.

3) The classification-based collision avoidance and road following script requires more training etc., but it allows the JetBot to stop according to the object (there can be more than 3 different objects). It is actually quite interesting, but it needs adjustment of settings inside the script. I recommend starting with the script mentioned in 1).

Best, T

sakae-og commented 3 years ago

Hi @tomMEM

I was able to perform line tracing and stopping using "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb".

I am very happy to be able to do what I want to do. Thank you very much.

Best regards, Yuu

tomMEM commented 3 years ago

Hello @sakae-og, I'm glad the script helped you achieve some of your goals, and thank you for your feedback. In case you would like to try the extended version (3_roadfollowing_classification_behavior), you can reuse the same dataset (free, blocked) from the collision avoidance training: copy it into the respective folders created with 1datacollection... and a set of CATEGORIES = ['free','blocked','box','cup'] (in the second cell), then collect box and cup images. After training and the TRT build, the category list in 3_roadfollowing... should be changed accordingly, e.g. to CATEGORIES_classification = ['free','blocked','box','cup']. Have fun, best, T

weweew1 commented 3 years ago

Hi @tomMEM, what compiler/environment do you use to build the TRT model? I tried to use Colab to build the TRT model, but an error occurred when I executed "sudo python3 setup.py install": "python3: can't open file 'setup.py': [Errno 2] No such file or directory". Please help me solve this problem, thanks!

tomMEM commented 3 years ago

Hello @weweew1, the build of the TRT model is based on the original JetBot build script (live_demo_resnet18_build_trt.ipynb). It requires the additional installations described inside that build script. The transformation of the trained ResNet model to TRT takes less than 5 minutes on the Jetson Nano.

a) I have not tried it myself, but Colab comes with some pre-installed packages, which might be too new or too old. If you want to run the training script there, you need to install as described in "jetbot SD from scratch" (start from point 4, but do install 8 to the end), making sure you pin exactly the same versions as on the Jetson Nano (e.g. pip install numpy==???). You can get a list of the library versions installed on the Jetson Nano with !pip freeze and !pip list or similar commands. Unfortunately there seems to be no "requirements" file in jetbot, so you need to sync the versions yourself. You could try to install at least the dependencies required by the training script (train_model.ipynb; see the imports in cell 1).

b) To run "setup.py" might be actually not necessary for training after cloning jetbot. c) For just transformation to TRT you would need to clone "torch2trt", check if the torch, torchversion and cuda are available and whether they have the right version (same as on jetson nano). Finally make sure GPU is activated under Menu/Runtime/Change runtime. Would be great to learn if it works out for you. Best. T

weweew1 commented 3 years ago

Hi @tomMEM, how do I transform the trained ResNet model to TRT on the Jetson Nano?

tomMEM commented 3 years ago

Hello @weweew1, you need to find the scripts with the word "build" in the name, for example "live_demo_resnet18_build_trt.ipynb", and open one in JupyterLab. Check the name and path of your ResNet model in the corresponding cell (e.g. model.load_state_dict(torch.load('best_model_resnet18.pth'))) and rename it if needed. Finally, run the notebook cell by cell.

The corresponding ResNet model (e.g. 'best_model_resnet18.pth') needs to be present in the same folder as the respective build script, e.g. in the "collision_avoidance" folder.

If you want to compare two models, repeat the process in the "road_following" folder (live_demo_build_trt.ipynb) or in the "classification_Stop_roadfollowing" folder (2_load_build_2_TRT_classification_model.ipynb) with the corresponding models. Of course, it will only work if you have a working jetbot installation based on a JetBot SD image (4.3).
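The core of those build scripts is roughly the following (a sketch based on the JetBot build notebooks; the number of output classes and the file names depend on your task):

```python
import torch
import torchvision
from torch2trt import torch2trt

# load the trained ResNet (2 outputs for free/blocked collision avoidance)
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
model.load_state_dict(torch.load('best_model_resnet18.pth'))
model = model.cuda().eval().half()

# convert to TensorRT using a dummy input of the camera size
data = torch.zeros((1, 3, 224, 224)).cuda().half()
model_trt = torch2trt(model, [data], fp16_mode=True)

# the TRT model has to be saved manually
torch.save(model_trt.state_dict(), 'best_model_trt.pth')
```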

Also make sure that you have previously installed "torch2trt":

```
cd $HOME
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install
```

Best. T

weweew1 commented 3 years ago

Hi @tomMEM, thanks for your answer!! I was able to finish the step that transforms the model to TRT ^^ If I want the JetBot to bypass obstacles, how can I do that? I'm an elementary Python user, sorry. Please teach me some Python knowledge, thanks!