hahafengge opened this issue 3 years ago
I think #38 is the same problem.
Somewhere along the line, the RGB channel order of the images must have changed (for example, RGB to BGR), but I don't know at what point it was introduced.
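For reference, an RGB/BGR mix-up explains exactly this symptom: swapping the first and last channels turns an orange pixel into a blue one. A minimal demonstration with NumPy (pixel values are just for illustration):

```python
import numpy as np

# An "orange" pixel in RGB order: strong red, medium green, no blue.
orange_rgb = np.array([255, 128, 0], dtype=np.uint8)

# Interpreting the same bytes as BGR (i.e. a channel-order swap)
# yields a blue-dominant pixel instead.
as_bgr = orange_rgb[::-1]

print(as_bgr.tolist())  # [0, 128, 255] -- orange now displays as blue
```

This is why the track line looked blue in the viewer while the underlying data (and training) were unaffected.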
OK, I will look into this problem. Can you share your VAE dataset and trained model with me? Also, please let me know your hardware information (USB camera or CSI camera).
@masato-ka I am using a JetRacer and a CSI camera. Although my dataset cannot support a good training result, it can show something. My VAE dataset and trained model have been packaged and uploaded. VAE.zip
OK, the color problem is a software bug: orange turns into blue. It is not a problem for agent training, though. I will fix it. In addition, I think your VAE model needs more epochs. Please change the epoch count from the default 50 to 100-300; the reconstructed images should come out more clearly.
Thank you for your reply. I will continue learning and follow your suggestions.
@hahafengge Hello, I am also working on this GitHub project, and I am trying to solve an installation problem I am facing. What version of JetPack are you using on your Jetbot for the learning_racer installation? I am trying JetPack 4.5 with the Docker version of the repository. Did you also install learning_racer with the Docker version? Everything is fine with the Docker version except that it fails at the training step of learning_racer with the error message "Could not get EGL display connection". It looks like a camera-related error, but I could not find a solution.
@gwiheo I think the problem is a compatibility issue. You need to install the right versions of PyTorch and CUDA (the ones recommended by NVIDIA) for JetPack 4.5.
@abdul-mannan-khan Could you share the versions of torch and torchvision that you are using in your Jetbot installation for learning_racer? I noticed that you are using JetPack 4.5.
@gwiheo First of all, sorry for the late reply; I was a little busy. I am using torch version 1.8.0, and my torchvision version is 0.11.1. I upgraded my JetPack to 4.5 because the previous one caused many issues with CUDA, and I was not able to use the GPU. I followed this forum; the method to install torch and torchvision is described there. I hope this helps.
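Since JetPack releases pair with specific PyTorch wheels, a small stdlib-only version check can catch a mismatched install early. A sketch (the version numbers below are the ones reported in this thread, not an official compatibility table):

```python
def version_tuple(v: str):
    """Turn a version string like '1.8.0' (or '1.8.0+cu111') into
    a tuple of ints, (1, 8, 0), for ordered comparison."""
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3])

def meets_minimum(installed: str, required: str) -> bool:
    """True when the installed version is at least the required one."""
    return version_tuple(installed) >= version_tuple(required)

# Versions reported to work on JetPack 4.5 in this thread:
print(meets_minimum("1.8.0", "1.8.0"))    # True  (torch)
print(meets_minimum("0.11.1", "0.8.0"))   # True  (torchvision)
print(meets_minimum("1.7.0", "1.8.0"))    # False (too old)
```

In practice you would feed it `torch.__version__` and `torchvision.__version__` after installing the NVIDIA-provided wheels.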
I checked my VAE model and found an error.
Modify the bottleneck method of the VAE class as below, ignoring F.softplus:
def bottleneck(self, h):
    mu, logvar = self.fc1(h), self.fc2(h)  # was: F.softplus(self.fc2(h))
    z = self.reparameterize(mu, logvar)
    return z, mu, logvar
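To put the fix in context, here is a minimal self-contained sketch of the corrected bottleneck. The layer names `fc1`/`fc2` follow the snippet above, but the sizes and surrounding structure are illustrative assumptions, not the project's actual architecture:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, h_dim: int = 1024, z_dim: int = 32):
        super().__init__()
        self.fc1 = nn.Linear(h_dim, z_dim)  # mean head
        self.fc2 = nn.Linear(h_dim, z_dim)  # log-variance head

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)   # logvar -> standard deviation
        eps = torch.randn_like(std)     # noise ~ N(0, 1)
        return mu + eps * std

    def bottleneck(self, h):
        # Fix: use fc2(h) as logvar directly. Wrapping it in F.softplus
        # forces logvar > 0 (variance > 1), which distorts the KL term.
        mu, logvar = self.fc1(h), self.fc2(h)
        z = self.reparameterize(mu, logvar)
        return z, mu, logvar

vae = VAE()
h = torch.randn(2, 1024)
z, mu, logvar = vae.bottleneck(h)
print(z.shape)  # torch.Size([2, 32])
```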
@abdul-mannan-khan Thank you for sharing your torch info. It is exactly the same as my torch and related settings. I had difficulty installing torch >= 1.8.1, so I settled on 1.8.0. I just wonder how other people are working around these problems. Anyway, racer train works under these conditions, and now I am moving on to the next step of racer training. In particular, I also have a track with an orange track line on a white background, and it produces very poor VAE track images; I need to solve that problem as well. Thanks again for your kind info.
@masato-ka Thank you so much for digging into this for us. This is really great, and it is a favor. Thank you so much. @gwiheo Good, I am glad we are working on the same thing together. Thanks to @masato-ka for helping us.
Fixed the reconstructed-image color bug in vae_viewer.ipynb in release 1.6.0.
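For anyone stuck on an older release, the usual remedy for this class of bug is to reverse the channel axis before display. A sketch, assuming the reconstruction comes back as an HxWx3 uint8 array in BGR order (as with OpenCV-style captures):

```python
import numpy as np

def to_rgb(bgr_image: np.ndarray) -> np.ndarray:
    """Reverse the last (channel) axis: BGR -> RGB, as a cheap view."""
    return bgr_image[..., ::-1]

# A 1x1 BGR image of an orange pixel displays correctly once converted.
bgr = np.array([[[0, 128, 255]]], dtype=np.uint8)   # B, G, R
rgb = to_rgb(bgr)
print(rgb[0, 0].tolist())  # [255, 128, 0]
```

In a notebook you would apply `to_rgb` to the frame just before passing it to `matplotlib`'s `imshow`, which expects RGB.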
@masato-ka Thank you, Masato-ka. Release 1.6.0 works beautifully. Here is my Jetbot running the Waveshare orange-line track. https://youtu.be/KOFc36sQG7Q
Great work! This video shows that the software works fine. I think v1.6.0 resolves the VAE model problem for the Waveshare course. How much time did you spend on agent learning?
@masato-ka I spent about 10-20 minutes on the run. After about 15-20 episodes, the Jetbot started to follow the track. It ran just like section 4.1 (Simulation) of your GitHub repository, where the same number of episodes was enough for the simulated Donkeycar to follow the simulation track.
Very good result! Now I'm implementing an auto-stop function for v1.7.0. With this function, a human no longer needs to watch for the agent going off course in order to stop it.
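One plausible way to implement such an auto-stop (this is my own sketch, not necessarily what v1.7.0 actually does) is to flag a course-out when some per-frame score, for example the VAE reconstruction error, stays above a calibrated threshold for several consecutive frames:

```python
from collections import deque

class AutoStop:
    """Signal a stop when a per-frame score (e.g. VAE reconstruction
    error) exceeds a threshold for `patience` consecutive frames.
    `threshold` and `patience` are hypothetical tuning parameters."""

    def __init__(self, threshold: float, patience: int = 5):
        self.threshold = threshold
        self.recent = deque(maxlen=patience)

    def update(self, score: float) -> bool:
        """Feed one frame's score; return True when the agent should stop."""
        self.recent.append(score)
        return (len(self.recent) == self.recent.maxlen
                and all(s > self.threshold for s in self.recent))

# A single noisy spike is ignored; a sustained spike triggers the stop.
stop = AutoStop(threshold=1.0, patience=3)
print([stop.update(s) for s in [0.2, 1.5, 0.3, 1.2, 1.4, 1.6]])
# -> [False, False, False, False, False, True]
```

Requiring several consecutive high-error frames makes the detector robust to one-off glitches while still reacting within a fraction of a second at camera frame rates.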
https://user-images.githubusercontent.com/1833346/147866023-da823647-355b-43da-a8dd-df3a0d41f88a.mp4
Wow, it really feels like a happy new year, @gwiheo. Thank you, @masato-ka. I am sorry for being a little slow here; I had to submit my annual report. I am thrilled about this experience and cannot wait to get to the lab and try it out. Wahaaaa
Hello @gwiheo, I hope you are well. I have a couple of questions for you. Could you please share your vae.torch model? Could you please also share the output of jetbot_vae_viewer.ipynb? Secondly, did you use Docker as described in the main README under the Install heading, like this:
#JETBOT_VERSION
$ sudo docker images jetbot/jetbot | grep jupyter | cut -f 8 -d ' ' | cut -f 2 -d '-'
#L4T_VERSION
$ sudo docker images jetbot/jetbot | grep jupyter | cut -f 8 -d ' ' | cut -f 3 -d '-'
I am still not getting the output described there.
Also, you mentioned that you spent about 15-20 minutes on episode training. By any chance, do you remember how many episodes you ran to train your reinforcement learning algorithm? A rough idea will be fine. Thank you.
@abdul-mannan-khan I use the Docker environment as described in Masato-ka's GitHub. Here is my vae.torch file: https://drive.google.com/file/d/17Z9hufHRI9HaXkHFQBEfeFX_egTEjKho/view?usp=sharing After 20 episodes, the Jetbot followed the track without going off it. One thing I would like to note: you need to wait when the Jetbot (or JetRacer) suddenly does not move even after you command it to start. In my case, the Jetbot responded immediately on the initial run, but after about the 10th episode it stopped responding, which meant it was busy computing, and I had to wait until it finished its internal calculations. Here is my vae_viewer image. The image quality was not always this good, but it worked even with poor images.
@gwiheo Thank you for sharing your vae.torch file.
Just a hint: it seems like you are using a Waveshare IMX219 camera, judging from the red tint at the borders. You can remove that by installing the camera color-correction settings:
wget https://www.waveshare.com/w/upload/e/eb/Camera_overrides.tar.gz
tar zxvf Camera_overrides.tar.gz
sudo cp camera_overrides.isp /var/nvidia/nvcam/settings/
sudo chmod 664 /var/nvidia/nvcam/settings/camera_overrides.isp
sudo chown root:root /var/nvidia/nvcam/settings/camera_overrides.isp
Though I am not sure if it solves the problem with the VAE preview.
Thank you for sharing the code; it has been very helpful for my study. However, I ran into a problem. I trained a model with your VAE_CNN.ipynb, and the TensorBoard display is correct, but when I run it in VAE_viewer.ipynb, the colors of the VAE's real-time image are wrong. Why is that? Please see the real-time image displayed by the VAE at the bottom right.