Closed: Eric-Ryan closed this issue 2 years ago.
I'm dealing with the same problem. I removed that line
prediction = np.array(prediction) * np.array([4.5, 0.1, 0.1, 0.1, 1.8, 1.8, 0.5, 0.5, 0.2])
but I still have the same problem. I think the point is to collect at least 10,000 examples per possible decision (straight, straight-right, etc.) in order to give more training cases to the CNN.
If that were the case, I would still expect the model to be capable of going left or right rather than only selecting 'no keys'. I believe I have tried training with and without that line of code and still get 'no keys'.
Also, do you think that with 10,000 examples I should keep that line of code, or comment it out? I greatly appreciate the response and will try it out tomorrow, thank you.
Have you tried running the same code in a different game? Since we haven't specified that it should follow the ball or anything, it would be kinda useless in its current state. Also, Rocket League isn't a pattern like "when the line turns like this, I turn like this", so I think it is hard for the AI to figure out what to do, as the training data probably has a lot of different ways of driving in the same place. Sorry for the half-badly written answer, this phone keyboard hates it when I write in another language.
On 2 Aug 2017 at 22:09, Eric-Ryan notifications@github.com wrote:
When I test my model the car (Rocket League) only chooses straight. If I train only by pressing left and right, then rather than going only left or only right, my model (Inception_v3) chooses to press 'no keys' the whole time. I am assuming the problem has something to do with this line of code, perhaps because I am using a different game and training at 800x600 rather than 1920x1080.
prediction = np.array(prediction) * np.array([4.5, 0.1, 0.1, 0.1, 1.8, 1.8, 0.5, 0.5, 0.2])
I don't think I understand the purpose of this line of code. Any help is greatly appreciated.
@Eric-Ryan I think you should remove that line of code and collect more training data; I think the key is more training data if you keep the 9 movements available so far (straight, right, left, WA, WD, AS, SD, no-keys) as possible outputs. One thing we could do is reduce the outputs to (right, left, straight, no-keys) and see what happens with the accuracy, training with the same amount of training data. If the accuracy reaches at least 80% and stays at that value, it could be evidence that we have to collect much more training data for 9 outputs.
We must also check that for every frame we are saving the correct output, I mean that when we process a frame in which we want to turn left, we actually save 'left' as the output for that frame. The time it takes to run 'grab_screen' may associate a frame with a different output; one solution is to check the keys immediately before grabbing the screen, which worked for me (see the sketch below).
I'm doing it in GTA VC. Tommy keeps going only forward; I still cannot solve the problem.
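A minimal sketch of that ordering, assuming the repo's grab_screen / key_check / keys_to_output helpers and an 800x600 window (adjust the names and capture region to your own collection script):

import cv2
from grabscreen import grab_screen   # repo helper (assumed)
from getkeys import key_check        # repo helper (assumed)

training_data = []

while True:
    # Read the pressed keys immediately before grabbing the frame, so the
    # label matches the image as closely as possible.
    keys = key_check()
    screen = grab_screen(region=(0, 40, 800, 640))
    screen = cv2.cvtColor(screen, cv2.COLOR_BGR2GRAY)
    screen = cv2.resize(screen, (160, 120))

    output = keys_to_output(keys)            # repo's one-hot mapping (assumed)
    training_data.append([screen, output])   # periodic saving to disk omitted here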
The line:
prediction = np.array(prediction) * np.array([4.5, 0.1, 0.1, 0.1, 1.8, 1.8, 0.5, 0.5, 0.2])
is there to apply weights to the one-hot prediction array, for smoother turning by emulating a gamepad/joystick.
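For what it's worth, a small sketch of what that multiplication does mechanically (the class order and the probability values here are made up, check your own one-hot mapping): the weights rescale each class probability before the argmax that picks the key combo.

import numpy as np

# Made-up softmax output for the 9 classes, order assumed to be
# [W, S, A, D, WA, WD, SA, SD, no-keys].
prediction = np.array([0.30, 0.02, 0.15, 0.14, 0.10, 0.10, 0.04, 0.05, 0.10])
weights    = np.array([4.5, 0.1, 0.1, 0.1, 1.8, 1.8, 0.5, 0.5, 0.2])

weighted = prediction * weights      # bias the distribution toward some actions
mode_choice = np.argmax(weighted)    # index of the key combo that gets pressed

With a 4.5x weight on the first class, the argmax tends to land on 'straight' unless the raw probabilities already favour a turn quite strongly, which is presumably why removing or re-tuning those weights gets suggested when moving to a different game or resolution.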
Although it is strange, I did have the model working somewhat about a week ago, but any new attempt at training a model gives me no change in my outputs. It may have something to do with the issue @giordand posted today, where the reshape width and height parameters need to be switched.
I also have gamepad tracking (an X1 controller) using PYXInput, and I use that instead of a one-hot: values ranging from 0.0 to 1.0 are fed into the model, and then a controller is emulated using vXboxInterface. Don't ask how I got it to work: many BSODs, and I still get a BSOD about 25% of the time when unplugging the physical controller.
Make sure the training data is balanced. Activation functions and gradient clipping also helped me avoid shortcuts where the network would only predict fixed values. But keyboard inputs generally worked far worse than working strictly with controller data in my experience. With controller data I started seeing some accurate behaviour after a few hours of training, whereas with keyboard data the model wouldn't do much at that point.
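A minimal sketch of gradient clipping, assuming a Keras optimizer (my example, not the poster's actual setup); clipnorm rescales any gradient tensor whose norm exceeds the threshold:

from tensorflow.keras.optimizers import Adam

# clipnorm clips each gradient tensor so its L2 norm never exceeds 1.0
# (the value is illustrative and worth tuning).
optimizer = Adam(clipnorm=1.0)

# model.compile(optimizer=optimizer,
#               loss='categorical_crossentropy',
#               metrics=['accuracy'])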
Are you running it in full screen or windowed?
Windowed at 800x600, resizing to 240x120. The data is balanced in my own way at collection time: I count the number of examples of each output as I collect (at every grab_screen) and I don't append to the training_data array if the example's class already has the maximum count. I.e. if the count array is [3,1,1,1,1,1,0,0,0] and the example to append is a W, I do not append it to training_data; if the example is any movement other than W, I append it.
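A minimal sketch of that collection-time balancing, under assumed names (maybe_append, counts); ties are allowed so collection can get started when all counts are zero:

import numpy as np

NUM_CLASSES = 9                          # one count per possible output
counts = np.zeros(NUM_CLASSES, dtype=int)
training_data = []

def maybe_append(frame, output):
    # Keep the sample only if its class doesn't already dominate the counts.
    cls = int(np.argmax(output))
    others_max = counts[np.arange(NUM_CLASSES) != cls].max()
    if counts[cls] > others_max:         # e.g. counts [3,1,1,...] and a new W frame
        return False                     # reject: this class is over-represented
    counts[cls] += 1
    training_data.append([frame, output])
    return True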
Is the loss decreasing at train time? How big is the train/validation discrepancy in accuracy? How much training data do you have? Did you make sure to have multiple day/night/weather conditions, or at least use the same ones at test time that you used for training? Do you use a batch-normalization layer? Have you manually checked whether the input images your model gets at test time look correct? You could also simplify the task and only try to do steering first, then work your way up.
@GodelBose answers by line:
30k frames is really not that much; that could definitely explain poor generalization. I'm currently training on 400k+, which is still a lot less than what the guys at NVIDIA used just for learning steering: https://arxiv.org/pdf/1604.07316.pdf. Also, yes, changing conditions can mess up your network quite a bit, definitely take that into account. On the other note, you should also try adding normalization layers; in my experience it can make a huge difference for some vision tasks. If you use Keras: https://keras.io/layers/normalization/#batchnormalization or tflearn: http://tflearn.org/layers/normalization/ or tensorflow: https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization
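For example, a minimal Keras sketch of a batch-normalization layer after a convolution (layer sizes and input shape are illustrative only; the tflearn and raw TensorFlow links above are the equivalents):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), padding='same', input_shape=(120, 160, 1)),
    layers.BatchNormalization(),            # normalize activations before the nonlinearity
    layers.Activation('relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(9, activation='softmax'),  # 9 movement classes, as discussed in this thread
])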
@GodelBose Is yours working? I have one at 400k+ that keeps going straight.
@frossaren Yes mine is following waypoints but still crashes way too often into walls/cars etc. Are you using keyboard or controller data?
@GodelBose Keyboard. Mine doesn't give a crap about anything and just drives forward like a tank.
The loss is about the same all the time too. Don't know what might be the issue.
@frossaren Maybe try capturing new data with a controller and then use a virtual controller at test time. My network was barely making progress on keyboard data after a day of training, but with controller data the network was driving OK-ish after a couple of hours. As for the loss, that happened to me on both data types occasionally and usually means it just learns one fixed action for all sorts of inputs. For me, experimenting with layer activations/normalization/gradient clipping helped, and I also started training with a focus on frames with left/right steering, including forward motion only some percentage of the time.
How did you manage to sort out the straight frames? Doing it manually would take ages, so I guess you have code for it?
Yes. Since it's controller data, the x, y axis values lie in [-1, 1]. To generate the training batches I iterate over each random training instance and then compute:
radius = np.sqrt(x_i**2 + y_i**2)
volume = np.pi * radius**2
ratio = volume / normalizer
rand_f = np.random.uniform(0, 1)
if rand_f > ratio:
    continue
This filters out most of the data with very little steering, since in those cases the radius of the controller joystick deflection will be close to zero. How much you want to reject can be controlled by the normalizer.
For keyboard data you could essentially do the same by just sampling a random number in [0, 1] for each straight-forward instance and then deciding how strict your rejection criterion is.
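A sketch of that keyboard-data version, assuming the straight/W class sits at index 0 of the one-hot (the keep probability is the rejection criterion to tune):

import numpy as np

STRAIGHT_IDX = 0          # assumed position of the straight/W class in the one-hot
KEEP_STRAIGHT_PROB = 0.2  # keep roughly 20% of straight-only frames

def filter_straight(training_data):
    # Drop most straight-only frames so left/right examples dominate the batches.
    filtered = []
    for frame, output in training_data:
        if np.argmax(output) == STRAIGHT_IDX and np.random.uniform(0, 1) > KEEP_STRAIGHT_PROB:
            continue      # reject this straight frame
        filtered.append([frame, output])
    return filtered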
@GodelBose Thank you!
@frossaren I had the same issue, but using alexnet2 I found out that the problem was in the data. After a few trainings on new data (about 10 epochs, 28,500 frames, not so much, yeah) the net started to act pretty well: A, W, D for now =) (Crossout game)
On unbalanced data it also goes straight, so this step seems important; I don't know why it has been removed =/