Open abhishekvarma23 opened 5 years ago
How to solve the list Index error while training the network.
Traceback (most recent call last):
File "train.py", line 190, in <module>
_main()
File "train.py", line 65, in _main
callbacks=[logging, checkpoint])
File "/home/isemes/anaconda3/envs/tf-gpu/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/isemes/anaconda3/envs/tf-gpu/lib/python3.5/site-packages/keras/engine/training.py", line 2192, in fit_generator
generator_output = next(output_generator)
File "/home/isemes/anaconda3/envs/tf-gpu/lib/python3.5/site-packages/keras/utils/data_utils.py", line 793, in get
six.reraise(value.__class__, value, value.__traceback__)
File "/home/isemes/anaconda3/envs/tf-gpu/lib/python3.5/site-packages/six.py", line 693, in reraise
raise value
File "/home/isemes/anaconda3/envs/tf-gpu/lib/python3.5/site-packages/keras/utils/data_utils.py", line 658, in _data_generator_task
generator_output = next(self._generator)
File "train.py", line 175, in data_generator
image, box = get_random_data(annotation_lines[i], input_shape, random=True)
File "/home/isemes/Music/keras-yolo3-master/yolo3/utils.py", line 39, in get_random_data
image = Image.open(line[0])
IndexError: list index out of range
@qqwweee Please help me to solve the issue
Thanks in advance
What is in your train.txt? It seems that a line in your train.txt is wrong.
###########Train.txt################################
Input data image size:1920*1080
C:\imgs1\vlcsnap-2018-07-21-18h42m15s301.png 180,485,462,741,0 1422,388,1675,625,1
C:\imgs1\vlcsnap-2018-07-23-16h37m17s162.png 175,201,462,474,0 84,808,384,1063,1
C:\imgs1\vlcsnap-2018-07-23-16h43m47s473.png 1377,699,1677,919,0
C:\imgs1\vlcsnap-2018-07-23-16h44m27s588.png 268,65,537,303,1 673,88,927,291,1
C:\imgs1\vlcsnap-2018-07-23-16h44m52s559.png 273,280,560,536,0
C:\imgs1\vlcsnap-2018-07-23-16h45m45s321.png 13,161,277,416,0
C:\imgs1\vlcsnap-2018-07-23-16h46m41s963.png 271,85,544,310,0
C:\imgs1\vlcsnap-2018-07-23-16h47m01s435.png 244,214,553,445,1
C:\imgs1\vlcsnap-2018-07-23-16h48m35s403.png 126,250,428,503,0 462,276,731,510,0
C:\imgs1\vlcsnap-2018-07-23-16h52m02s836.png 260,570,557,816,0 1404,400,1651,601,1
C:\imgs1\vlcsnap-2018-07-23-16h52m16s567.png 273,210,533,434,0 141,374,1677,605,1
C:\imgs1\vlcsnap-2018-07-23-17h39m19s389.png 276,185,554,421,1 1386,392,1668,612,1
C:\imgs1\vlcsnap-2018-07-23-17h39m39s510.png 385,199,656,412,0 1380,383,1677,625,1
C:\imgs1\vlcsnap-2018-07-23-17h39m54s920.png 366,681,684,943,0 1406,390,1688,632,1
#############Train File############################
annotation_path = ('train1.txt')
log_dir = 'logs/000/'
classes_path = 'model_data/classfile.txt'
anchors_path = 'model_data/yolo_anchors.txt'
class_names = get_classes(classes_path)
num_classes = len(class_names)
anchors = get_anchors(anchors_path)
input_shape = (416,416) # multiple of 32, hw
###############Error(Training Pattern) with 120 Images ###############################
Create YOLOv3 model with 9 anchors and 2 classes.
Load weights model_data/yolo_weights.h5.
Freeze the first 249 layers of total 252 layers.
Train on 110 samples, val on 12 samples, with batch size 1.
Epoch 1/50
1/110 [..............................] - ETA: 1:02:52 - loss: 6837.2021
2/110 [..............................] - ETA: 31:18 - loss: 6823.4150
3/110 [..............................] - ETA: 20:49 - loss: 6808.2749
4/110 [>.............................] - ETA: 15:33 - loss: 6795.0779
5/110 [>.............................] - ETA: 12:24 - loss: 6776.8220
6/110 [>.............................] - ETA: 10:18 - loss: 6758.4811
7/110 [>.............................] - ETA: 8:47 - loss: 6742.9335
8/110 [=>............................] - ETA: 7:39 - loss: 6723.7268
9/110 [=>............................] - ETA: 6:46 - loss: 6705.6183
10/110 [=>............................] - ETA: 6:03 - loss: 6687.0032
11/110 [==>...........................] - ETA: 5:29 - loss: 6667.8527
12/110 [==>...........................] - ETA: 5:00 - loss: 6648.6914
13/110 [==>...........................] - ETA: 4:36 - loss: 6629.6406
14/110 [==>...........................] - ETA: 4:14 - loss: 6611.4614
15/110 [===>..........................] - ETA: 3:56 - loss: 6592.5941
16/110 [===>..........................] - ETA: 3:40 - loss: 6573.6660
17/110 [===>..........................] - ETA: 3:26 - loss: 6555.0974
18/110 [===>..........................] - ETA: 3:14 - loss: 6536.2294
19/110 [====>.........................] - ETA: 3:02 - loss: 6517.4622
20/110 [====>.........................] - ETA: 2:52 - loss: 6498.8002
21/110 [====>.........................] - ETA: 2:43 - loss: 6480.1095
22/110 [=====>........................] - ETA: 2:35 - loss: 6463.1071
23/110 [=====>........................] - ETA: 2:27 - loss: 6444.5291
24/110 [=====>........................] - ETA: 2:20 - loss: 6425.9446
25/110 [=====>........................] - ETA: 2:14 - loss: 6407.5146
26/110 [======>.......................] - ETA: 2:08 - loss: 6389.1994
27/110 [======>.......................] - ETA: 2:03 - loss: 6370.9553
28/110 [======>.......................] - ETA: 1:58 - loss: 6353.2001
29/110 [======>.......................] - ETA: 1:53 - loss: 6334.9555
30/110 [=======>......................] - ETA: 1:49 - loss: 6316.9706
31/110 [=======>......................] - ETA: 1:45 - loss: 6298.9556
32/110 [=======>......................] - ETA: 1:41 - loss: 6281.1080
33/110 [========>.....................] - ETA: 1:37 - loss: 6263.1779
34/110 [========>.....................] - ETA: 1:34 - loss: 6245.3073
35/110 [========>.....................] - ETA: 1:30 - loss: 6227.5681
36/110 [========>.....................] - ETA: 1:27 - loss: 6209.8360
37/110 [=========>....................] - ETA: 1:24 - loss: 6192.2246
38/110 [=========>....................] - ETA: 1:22 - loss: 6174.7103
39/110 [=========>....................] - ETA: 1:19 - loss: 6158.0128
40/110 [=========>....................] - ETA: 1:16 - loss: 6140.5929
41/110 [==========>...................] - ETA: 1:14 - loss: 6123.1823
42/110 [==========>...................] - ETA: 1:12 - loss: 6105.9018
43/110 [==========>...................] - ETA: 1:09 - loss: 6088.9188
44/110 [===========>..................] - ETA: 1:07 - loss: 6072.4373
45/110 [===========>..................] - ETA: 1:05 - loss: 6055.4702
46/110 [===========>..................] - ETA: 1:03 - loss: 6038.4319
47/110 [===========>..................] - ETA: 1:01 - loss: 6021.5183
48/110 [============>.................] - ETA: 59s - loss: 6005.1967
49/110 [============>.................] - ETA: 58s - loss: 5988.3882
50/110 [============>.................] - ETA: 56s - loss: 5971.6597
51/110 [============>.................] - ETA: 55s - loss: 5955.0506
52/110 [=============>................] - ETA: 53s - loss: 5938.4933
53/110 [=============>................] - ETA: 51s - loss: 5921.9256
54/110 [=============>................] - ETA: 50s - loss: 5905.4772
Warning (from warnings module):
File "C:\Users\Abhishek\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\callbacks.py", line 122
% delta_t_median)
UserWarning: Method on_batch_end() is slow compared to the batch update (0.182816). Check your callbacks.
55/110 [==============>...............] - ETA: 49s - loss: 5889.1740
Warning (from warnings module):
File "C:\Users\Abhishek\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\callbacks.py", line 122
% delta_t_median)
UserWarning: Method on_batch_end() is slow compared to the batch update (0.172746). Check your callbacks.
56/110 [==============>...............] - ETA: 47s - loss: 5872.9922
57/110 [==============>...............] - ETA: 46s - loss: 5856.7600
58/110 [==============>...............] - ETA: 44s - loss: 5840.6051
Warning (from warnings module):
File "C:\Users\Abhishek\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\callbacks.py", line 122
% delta_t_median)
UserWarning: Method on_batch_end() is slow compared to the batch update (0.163192). Check your callbacks.
59/110 [===============>..............] - ETA: 43s - loss: 5824.5102
60/110 [===============>..............] - ETA: 41s - loss: 5808.4763
61/110 [===============>..............] - ETA: 40s - loss: 5792.5592
62/110 [===============>..............] - ETA: 39s - loss: 5776.7299
63/110 [================>.............] - ETA: 38s - loss: 5760.9184
64/110 [================>.............] - ETA: 36s - loss: 5745.1704
65/110 [================>.............] - ETA: 35s - loss: 5729.4931
66/110 [=================>............] - ETA: 34s - loss: 5714.0763
67/110 [=================>............] - ETA: 33s - loss: 5698.5413
68/110 [=================>............] - ETA: 32s - loss: 5683.0661
69/110 [=================>............] - ETA: 31s - loss: 5667.6950
70/110 [==================>...........] - ETA: 30s - loss: 5652.3257
71/110 [==================>...........] - ETA: 29s - loss: 5637.1168
72/110 [==================>...........] - ETA: 28s - loss: 5621.9181
Traceback (most recent call last):
File "C:\Users\Abhishek\Documents\keras-yolo3-f4a9c40f4615cdbb774942507ecad3af5f05c990\train.py", line 190, in <module>
@sunflowercao Thanks for the reply. The above two blocks are my train.txt and train.py files.
Could you please take a look and help me with the issue?
Thanks in advance
@AryaCao Do I need to change the input shape from (1920*1080) to (1920*1088)? # multiple of 32
The train.txt should only contain training data lines like the ones above. Does yours also contain the line 'Input data image size:1920*1080'? The "line[0]: list index out of range" error usually means a line is empty, so check your train.txt and delete any empty or extra lines.
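For anyone hitting this, here is a minimal sketch for catching empty or malformed annotation lines before training. The file name 'train.txt' and the "image_path box,box,..." line format are assumptions taken from this thread, and check_annotation_file is a hypothetical helper, not part of the repo:

# Sanity-check a keras-yolo3 style annotation file.
# Expected line format: /path/to/image.jpg x_min,y_min,x_max,y_max,class_id [more boxes ...]
# Assumes image paths contain no spaces, as in the examples above.

def check_annotation_file(path='train.txt'):
    with open(path) as f:
        for lineno, raw in enumerate(f, start=1):
            line = raw.strip()
            if not line:
                print('line {}: empty line (delete it)'.format(lineno))
                continue
            parts = line.split()
            if len(parts) < 2:
                print('line {}: no boxes found: {!r}'.format(lineno, line))
            for box in parts[1:]:
                fields = box.split(',')
                if len(fields) != 5 or not all(v.lstrip('-').isdigit() for v in fields):
                    print('line {}: malformed box {!r}'.format(lineno, box))

if __name__ == '__main__':
    check_annotation_file()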
@AryaCao Thank you so much.
I had saved extra lines in the train.txt. Deleting them solved the issue.
Hello! I have the same issue.
Traceback (most recent call last):
File "train.py", line 192, in <module>
_main()
File "train.py", line 33, in _main
freeze_body=2, weights_path='model_data/yolo_weights.h5') # make sure you know what you freeze
File "train.py", line 132, in create_model
[*model_body.output, *y_true])
File "/Users/main/.pyenv/versions/3.5.2/lib/python3.5/site-packages/keras/engine/topology.py", line 619, in __call__
output = self.call(inputs, **kwargs)
File "/Users/main/.pyenv/versions/3.5.2/lib/python3.5/site-packages/keras/layers/core.py", line 663, in call
return self.function(inputs, **arguments)
File "/Users/main/project/keras-yolo3/yolo3/model.py", line 366, in yolo_loss
input_shape = K.cast(K.shape(yolo_outputs[0])[1:3] * 32, K.dtype(y_true[0]))
IndexError: list index out of range
But in my case, train.txt looks like this:
/Users/main/project/keras-yolo3/VOCdevkit/VOC2007/JPEGImages/a01%2520copy%252039.jpg 45,0,232,268,0
/Users/main/project/keras-yolo3/VOCdevkit/VOC2007/JPEGImages/a01%2520copy%252038.jpg 46,141,229,266,0
/Users/main/project/keras-yolo3/VOCdevkit/VOC2007/JPEGImages/a01%2520copy%252037.jpg 82,166,175,275,0
/Users/main/project/keras-yolo3/VOCdevkit/VOC2007/JPEGImages/a01%2520copy%252036.jpg 31,134,193,237,0
/Users/main/project/keras-yolo3/VOCdevkit/VOC2007/JPEGImages/a01%2520copy%252035.jpg 0,0,149,319,0
/Users/main/project/keras-yolo3/VOCdevkit/VOC2007/JPEGImages/a01%2520copy%252034.jpg 0,0,251,320,0
/Users/main/project/keras-yolo3/VOCdevkit/VOC2007/JPEGImages/a01%2520copy%252033.jpg 30,9,253,320,0
Is there any problem or another reason?
Thanks in advance.
I found an empty line in the file, deleted it, and ran train.py again, but I still get the same error.
I found my mistake: I had set anchors_path to the annotation file by accident. Sorry, it was a simple mistake.
annotation_path = 'train.txt'
log_dir = 'logs/000/'
classes_path = 'model_data/voc_classes.txt'
anchors_path = 'model_data/yolo_anchors.txt' # this had mistakenly been set to 'train.txt' (ERROR)
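For anyone else who mixes these up, a minimal sketch of a sanity check, assuming the repo's anchors file format (a single line of comma-separated numbers forming width,height pairs); check_anchors_file is a hypothetical helper, not part of train.py:

def check_anchors_file(anchors_path='model_data/yolo_anchors.txt'):
    # yolo_anchors.txt is one line of comma-separated numbers (9 width,height pairs for YOLOv3).
    with open(anchors_path) as f:
        content = f.readline().strip()
    try:
        values = [float(x) for x in content.split(',')]
    except ValueError:
        raise ValueError('{} does not parse as anchors -- did anchors_path get '
                         'pointed at train.txt by mistake?'.format(anchors_path))
    if len(values) == 0 or len(values) % 2 != 0:
        raise ValueError('{}: expected an even number of values, got {}'.format(anchors_path, len(values)))
    print('{}: {} anchors look fine'.format(anchors_path, len(values) // 2))

check_anchors_file()  # run this before create_model()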
How can I solve this problem?

Epoch 1/51
Traceback (most recent call last):
File "Train_YOLO.py", line 217, in <module>
callbacks=[logging, checkpoint],
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\training.py", line 2192, in fit_generator
generator_output = next(output_generator)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\utils\data_utils.py", line 793, in get
six.reraise(value.__class__, value, value.__traceback__)
File "C:\Users\AppData\Roaming\Python\Python37\site-packages\six.py", line 703, in reraise
raise value
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\utils\data_utils.py", line 658, in _data_generator_task
generator_output = next(self._generator)
File "D:\Notsaved\Downloads\TrainYourOwnYOLO-master\TrainYourOwnYOLO-master\Utils\Train_Utils.py", line 194, in data_generator
image, box = get_random_data(annotation_lines[i], input_shape, random=True)
File "D:\Notsaved\Downloads\TrainYourOwnYOLO-master\TrainYourOwnYOLO-master\2_Training\src\keras_yolo3\yolo3\utils.py", line 66, in get_random_data
image = Image.open(line[0])
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\PIL\Image.py", line 2809, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'D:/Notsaved/naiduv/Downloads/TrainYourOwnYOLO/-master/'
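For what it's worth, the FileNotFoundError above shows PIL being handed a directory-like path rather than an image file, so the paths in the annotation file look wrong. A minimal sketch for checking them; the file name 'train.txt' and the check_image_paths helper are assumptions, not part of TrainYourOwnYOLO:

import os

def check_image_paths(annotation_path='train.txt'):
    # Assumes image paths contain no spaces, as in the annotation lines shown in this thread.
    with open(annotation_path) as f:
        for lineno, raw in enumerate(f, start=1):
            line = raw.strip()
            if not line:
                continue
            image_path = line.split()[0]
            if os.path.isdir(image_path):
                print('line {}: {} is a directory, not an image file'.format(lineno, image_path))
            elif not os.path.isfile(image_path):
                print('line {}: {} does not exist'.format(lineno, image_path))

check_image_paths()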
I have the same error, can you solve it?
I had the same error because I tried to merge two train.txt files. When I used voc_annotation.py to generate one train.txt for all datasets, the problem was solved.
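If you do merge annotation files by hand instead of regenerating them with voc_annotation.py, here is a minimal sketch that concatenates them while dropping blank lines, which are the usual cause of the IndexError in this thread; the file names and the merge_annotation_files helper are hypothetical:

def merge_annotation_files(inputs=('train_a.txt', 'train_b.txt'), output='train.txt'):
    # Collect non-empty lines from each input file, then write them once, newline-separated.
    lines = []
    for path in inputs:
        with open(path) as f:
            lines.extend(l.strip() for l in f if l.strip())
    with open(output, 'w') as f:
        f.write('\n'.join(lines) + '\n')
    print('wrote {} annotation lines to {}'.format(len(lines), output))

merge_annotation_files()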
You need to work in Ubuntu.
(yolov3-tf2-cpu) N:\MachineLearning\TrainYourOwnYOLO\TrainYourOwnYOLO\2_Training>python Train_YOLO.py
Using TensorFlow backend.
Create YOLOv3 model with 9 anchors and 1 classes.
Load weights N:\MachineLearning\TrainYourOwnYOLO\TrainYourOwnYOLO\2_Training\src\keras_yolo3\yolo.h5.
Freeze the first 249 layers of total 252 layers.
8888888888888888888***98888888888888888888888888888888888
['N:/MachineLearning/TrainYourOwnYOLO/', 'N:/MachineLearning/TrainYourOwnYOLO/', 'N:/MachineLearning/TrainYourOwnYOLO/', ..., 'N:/MachineLearning/TrainYourOwnYOLO/']
(the same path, 'N:/MachineLearning/TrainYourOwnYOLO/', repeated for every annotation line)
Train on 90 samples, val on 10 samples, with batch size 32.
Traceback (most recent call last):
File "Train_YOLO.py", line 265, in
help me! thank you!