Closed khalyl-hamdy closed 5 years ago
@khalyl-hamdy Those two files can be automatically generated when you create your own TensorFlow records:)
One more question: the data I am using is a set of images with their labels inside txt files, i.e. every image has its own txt file and both share the same name. Is this the right form of data to convert the whole dataset into TensorFlow records?
@khalyl-hamdy You'd better convert your dataset into the Synth90k dataset's format first, so you can use the tools in this repo to generate the tfrecords. Otherwise you may have to implement your own data feed pipeline:)
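For reference, folding per-image txt label files into a single Synth90k-style annotation file could be sketched like this (the `<image_name> <label>` line format and file names are assumptions; check the repo's sample annotation files for the exact format its tools expect):

```python
import os

def build_annotation_file(image_dir, out_path):
    """Collect per-image .txt label files into one annotation file
    with lines of the (assumed) form: <image_name> <label>."""
    with open(out_path, "w", encoding="utf-8") as out:
        for name in sorted(os.listdir(image_dir)):
            if not name.lower().endswith((".jpg", ".png")):
                continue
            # The label file shares the image's base name, per the setup above.
            label_file = os.path.join(image_dir, os.path.splitext(name)[0] + ".txt")
            if not os.path.exists(label_file):
                continue
            with open(label_file, encoding="utf-8") as f:
                label = f.read().strip()
            out.write("{} {}\n".format(name, label))
```
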
Well, I prepared the data and created the tfrecords files, then tried to train the model, but I got this message and I don't know what's wrong. Can you check it for me (I am using TensorFlow 1.10.0)?
```
I0619 10:06:19.095118 6471 train_shadownet.py:572] Use single gpu to train the model
2019-06-19 10:06:22.699408: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-06-19 10:06:22.857261: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
I0619 10:06:23.081531 6471 train_shadownet.py:271] Training from scratch
2019-06-19 10:06:28.731836: W tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at ctc_loss_op.cc:166 : Invalid argument: Saw a non-null label (index >= num_classes - 1) following a null label, batch: 0 num_classes: 37 labels: 27
Traceback (most recent call last):
  File "/home/khalyl/anaconda3/envs/rcnnenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1278, in _do_call
    return fn(*args)
  File "/home/khalyl/anaconda3/envs/rcnnenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1263, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/khalyl/anaconda3/envs/rcnnenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.OutOfRangeError: End of sequence
  [[Node: val_IteratorGetNext = IteratorGetNext[output_shapes=[[4,32,100,3],

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tools/train_shadownet.py", line 578, in <module>

Caused by op 'val_IteratorGetNext', defined at:
  File "tools/train_shadownet.py", line 578, in <module>

OutOfRangeError (see above for traceback): End of sequence
  [[Node: val_IteratorGetNext = IteratorGetNext[output_shapes=[[4,32,100,3],
```
@khalyl-hamdy Your config file is wrongly generated. Please check it yourself:)
Well, I only changed train.batch_size to 4. Which other parameters do you think I need to change besides the batch size?
Thanks a lot.
@khalyl-hamdy class nums should also be adjusted according to your own dataset:)
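As a rough illustration of why the class count matters for the CTC error above: CTC reserves one extra class for the blank symbol, so the class count should be the size of your character set plus one. A stdlib-only sketch, assuming a Synth90k-style annotation file with `<image_path> <label>` lines (the file format is an assumption; adapt it to your own data):

```python
def count_ctc_classes(annotation_path):
    """Return the CTC class count for an annotation file whose lines
    look like "<image_path> <label>" (format assumed): the number of
    distinct characters in the labels, plus one for the CTC blank."""
    charset = set()
    with open(annotation_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split(" ", 1)
            if len(parts) == 2:
                charset.update(parts[1])
    return len(charset) + 1  # characters + 1 CTC blank label
```

For English lowercase plus digits this gives 26 + 10 + 1 = 37, which matches the `num_classes: 37` in the error log; an Arabic dataset will give a different number.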
I changed that. Also, my images do not all have the same shape (same height, but different widths). Is that a problem?
@khalyl-hamdy They will be resized into the same shape:)
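For context: the pipeline squashes every image to the fixed 100x32 network input regardless of aspect ratio. A nearest-neighbour sketch of that resize, in NumPy purely for illustration (the repo itself resizes with cv2/TensorFlow ops):

```python
import numpy as np

def resize_nearest(img, out_h=32, out_w=100):
    """Nearest-neighbour resize of an HxWxC image to the fixed
    (32, 100) input shape (illustration only)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source col for each output col
    return img[rows][:, cols]
```
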
Well, I changed the parameters and I still get the same error every time I try to start the training; I couldn't figure out where this error comes from.
```
OutOfRangeError: End of sequence
  [[Node: val_IteratorGetNext = IteratorGetNext[output_shapes=[[4,32,100,3], , [4]], output_types=[DT_FLOAT, DT_VARIANT, DT_STRING], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
```
And should I use TensorFlow 1.12.0, or are lower versions fine as well?
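Incidentally, an `OutOfRangeError: End of sequence` on `val_IteratorGetNext` often just means the validation tfrecords file holds fewer records than one batch (4 here). A stdlib-only way to count records, assuming the standard TFRecord framing (8-byte little-endian length, 4-byte CRC, payload, 4-byte CRC):

```python
import struct

def count_tfrecords(path):
    """Count the records in a TFRecord file by walking its framing:
    each record is <8-byte length><4-byte CRC><payload><4-byte CRC>.
    (CRCs are skipped, not verified.)"""
    n = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            (length,) = struct.unpack("<Q", header)
            f.seek(4 + length + 4, 1)  # skip length CRC, payload, payload CRC
            n += 1
    return n
```

If the count is smaller than the batch size, either add more validation samples or lower the batch size.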
@khalyl-hamdy My local tensorflow-gpu version is 1.12.0:)
Do you think it may be the cause of the error?
@khalyl-hamdy No, I do not think the problem is caused by the tensorflow version:)
Well, I will try to train on the English dataset and see whether I get the same error. Thank you very much:)
@khalyl-hamdy I recommend you first train on the Synth90k dataset following the readme file, which will help you understand the whole project, and use your own data to train the model later:)
Hey,
I want to use this project to create a model for the Arabic language, so I wonder: can these two files be automatically generated? If yes, how?
Thanks in advance