Open gckken95 opened 6 years ago
From my experience, this error usually happens when you run a notebook cell multiple times. Try restarting the kernel and running it again. Every time you build the network you have to restart the kernel; otherwise the code will keep adding layers to it. Hope this helps!
https://github.com/tflearn/tflearn/issues/360
Please see this link; I suggest restarting the notebook and trying again.
I was following your ssd-mobilenet tutorial. Splitting the dataset, generating the CSV, and generating the TFRecord were all separate steps, and I ran the .py scripts from the command line, so this doesn't feel like a kernel-reset problem.
@gckken95 Did you solve this? I ran into the same problem.
My own dataset images are 2560×1920 and the class names are Chinese. After some adjustments I can generate the TFRecord, but this error occurs when running train.py. I compared my data format against the one in the video and found no discrepancy. I also read the blog post; since the train/test data were split randomly with Python, there shouldn't be any leftover entries. So where is the data going out of range? The traceback is below; any pointers would be appreciated.

Traceback (most recent call last):
  File "train.py", line 183, in <module>
    tf.app.run()
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "train.py", line 179, in main
    graph_hook_fn=graph_rewriter_fn)
  File "D:\chenxf\gck\models-master\research\object_detection\trainer.py", line 276, in train
    train_config.prefetch_queue_capacity, data_augmentation_options)
  File "D:\chenxf\gck\models-master\research\object_detection\trainer.py", line 59, in create_input_queue
    tensor_dict = create_tensor_dict_fn()
  File "train.py", line 120, in get_next
    dataset_builder.build(config)).get_next()
  File "D:\chenxf\gck\models-master\research\object_detection\builders\dataset_builder.py", line 123, in build
    num_additional_channels=input_reader_config.num_additional_channels)
  File "D:\chenxf\gck\models-master\research\object_detection\data_decoders\tf_example_decoder.py", line 271, in __init__
    use_display_name)
  File "D:\chenxf\gck\models-master\research\object_detection\utils\label_map_util.py", line 152, in get_label_map_dict
    label_map = load_labelmap(label_map_path)
  File "D:\chenxf\gck\models-master\research\object_detection\utils\label_map_util.py", line 135, in load_labelmap
    text_format.Merge(label_map_string, label_map)
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 533, in Merge
    descriptor_pool=descriptor_pool)
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 587, in MergeLines
    return parser.MergeLines(lines, message)
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 620, in MergeLines
    self._ParseOrMerge(lines, message)
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 635, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 735, in _MergeField
    merger(tokenizer, message, field)
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 823, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 735, in _MergeField
    merger(tokenizer, message, field)
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 874, in _MergeScalarField
    value = tokenizer.ConsumeString()
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 1237, in ConsumeString
    the_bytes = self.ConsumeByteString()
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 1252, in ConsumeByteString
    the_list = [self._ConsumeSingleByteString()]
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_format.py", line 1277, in _ConsumeSingleByteString
    result = text_encoding.CUnescape(text[1:-1])
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_encoding.py", line 103, in CUnescape
    result = ''.join(_cescape_highbit_to_str[ord(c)] for c in result)
  File "D:\chenxf\gck\anaconda\envs\dl\lib\site-packages\google\protobuf\text_encoding.py", line 103, in <genexpr>
    result = ''.join(_cescape_highbit_to_str[ord(c)] for c in result)
IndexError: list index out of range
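The last two frames show where it goes out of range: protobuf's `CUnescape` indexes a byte-oriented escape table (`_cescape_highbit_to_str`) by each character's code point while parsing the string values in your label map. If that table only covers byte values 0–255 (an assumption based on its byte-escape role; the table name is taken from the traceback), then any character whose code point exceeds 255, such as a Chinese class name, raises exactly this IndexError. A minimal sketch of the failure mode, with a hypothetical stand-in table:

```python
# Stand-in for protobuf's byte-oriented escape table: 256 entries,
# one per possible byte value (an assumption for illustration).
escape_table = [chr(i) for i in range(256)]

def unescape_like(text):
    # Mirrors the failing line from the traceback:
    #   ''.join(_cescape_highbit_to_str[ord(c)] for c in result)
    return ''.join(escape_table[ord(c)] for c in text)

print(unescape_like("person"))   # ASCII: every code point < 256, works

try:
    unescape_like("人")           # ord("人") == 20154, far past index 255
except IndexError as e:
    print("IndexError:", e)      # the same error as in the traceback
```

If this is the cause, a common workaround is to use ASCII names in `label_map.pbtxt` (the Chinese text can stay in the `display_name` handling of your own tooling), rather than feeding multi-byte characters through the text-format parser.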