Hi @uzair789, I'm not sure what causes this, but I have seen similar problems arise when there are corrupt files in the dataset. The queue runners that fill up the batch queue then crash one by one. Each crash produces just an error message and training continues, but once this has happened a few times there are no queue runners left and training crashes. One way to find out is to check the training log for error messages related to decode_image. You could also make a test script with a very simple pipeline, i.e. just one thread, reading the files in your dataset and using decode_image. Then you should be able to troubleshoot this much quicker.
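For anyone who wants to try that, here is a minimal single-threaded sketch of such a check (assuming TensorFlow 1.x, which facenet uses; the paths in image_paths are just placeholders):

import tensorflow as tf

# Try to decode every image in the dataset and report the files that fail,
# using a single thread and no queues.
image_paths = ['/path/to/dataset/img_0001.jpg', '/path/to/dataset/img_0002.png']

with tf.Session() as sess:
    contents = tf.placeholder(tf.string)
    decoded = tf.image.decode_image(contents, channels=3)
    for path in image_paths:
        try:
            with open(path, 'rb') as f:
                sess.run(decoded, feed_dict={contents: f.read()})
        except tf.errors.InvalidArgumentError as e:
            print('Corrupt or unsupported image: %s (%s)' % (path, e))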
I was getting this error (I think) because: a) using JPGs instead of PNGs b) some greyscale images mixed w/ RGB images
I fixed by changing
image = tf.image.decode_png(file_contents)
to
# Support other formats; Force three channels
image = tf.image.decode_image(file_contents, channels=3)
at https://github.com/davidsandberg/facenet/blob/master/src/train_softmax.py#L124.
Also, I commented out https://github.com/davidsandberg/facenet/blob/master/src/train_softmax.py#L135, (since I couldn't figure out what it was supposed to be doing).
Thanks David and bkj. I was able to move past the iteration I was getting stuck at by following bkj's advice. I added the 'channels = 3' parameter into the decode_image() function and it seems to work now. Will still need to wait and see if the whole training process runs without getting stuck at some other iteration.
Closing this as the problem was solved by following bkj's advice.
I had a similar problem; I resolved it by changing the line:
filename_queue = tf.train.string_input_producer([tfrecords_filename], num_epochs=num_epochs)
to
filename_queue = tf.train.string_input_producer([tfrecords_filename])
In the TensorFlow documentation for tf.train.string_input_producer, it says:
num_epochs: .... If not specified, string_input_producer can cycle through the strings in string_tensor an unlimited number of times.
That fixed my issue, since I did not necessarily hit this error in the first round, but very randomly on the subsequent epochs!
I was facing this error in another code sample. With num_epochs=2 it doesn't throw this error. I did not get time to debug the issue.
I got the same error with train_tripletloss. I'm already using the decode_image function with channels=3, all pictures are RGB, and string_input_producer is not used. Can anyone help?
For the record, I finally found my error. It was related to this damned hidden .DS_Store file that MacOS creates automatically. Removed it from my dataset directory and it works now.
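In case it helps someone else, a small sketch of cleaning these out recursively (the dataset path below is a placeholder):

import os

# Remove macOS .DS_Store files from a dataset directory and all its subdirectories
dataset_dir = '/path/to/dataset'
for root, dirs, files in os.walk(dataset_dir):
    for name in files:
        if name == '.DS_Store':
            os.remove(os.path.join(root, name))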
@aginpatrick how did you go about discovering that was the reason?
@maxisme I recreated by hand a new directory with a new dataset (which was a copy of half of the original because I was suspecting something related to image format or image dimensions, something like that). It worked. I updated this dataset to include 3/4 of the original (worked) and so on. With a copy of 100% of the old dataset, it still worked! Then I began to suspect something related to hidden files that I could have in my original directory/dataset. Bingo. It was .DS_Store. Dammit!
Haha. Couldn't this be solved by making get_dataset -> get_image_paths a bit tighter? I just tried:
image_paths = [os.path.join(facedir,img) for img in images if ".jpg" in img]
as a replacement for https://github.com/davidsandberg/facenet/blob/master/src/facenet.py#L336 and am still getting the error :(
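(For completeness, a filter that also skips hidden files such as .DS_Store could look like the sketch below, although, as noted, this alone did not make the error go away here; the extension whitelist is just an assumption.)

VALID_EXTS = ('.jpg', '.jpeg', '.png')
image_paths = [os.path.join(facedir, img) for img in images
               if img.lower().endswith(VALID_EXTS) and not img.startswith('.')]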
Hmmm. I suggest trying what I did: run your code with a minimal dataset and augment it progressively.
Still throwing the error!
I have evaluated the dataset to look for corrupt files (as @davidsandberg said) but I can't find any:
import cv2

for image in image_paths:
    if cv2.imread(image) is None:  # cv2.imread returns None for files it cannot decode
        print(image)
Even when using the vggface dataset it does this? I have noticed that in the download code you convert to 250px and also save as PNG, but the image is then resized to 160px at https://github.com/davidsandberg/facenet/blob/master/src/train_tripletloss.py#L118, and the file type is irrelevant here: https://github.com/davidsandberg/facenet/blob/master/src/train_tripletloss.py#L108. Please, can anyone else help? I have been trying to find this bug forever!
I saw the same error but found out that the path to the .record files and num_classes were wrong.
@maxisme Have you solved the problem? I met the same error...
In my case it happened because the input images and the ground-truth images did not have the same dimensions (720 x 720 vs 360 x 360) (working on the deeplab-resnet-master project, which is based on semantic segmentation).
@maxisme, I also met this problem today and, just as @davidsandberg said, there were some corrupt images in my dataset, i.e. images that could not be read for some unknown reason. I wrote a simple script to find those images; you can try it. The script is not well organized, but you can write one based on the same idea. Hope it is useful.
import os
import shutil
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import imread

data_dir = 'Your data dir'
flds = os.listdir(data_dir)  # All folders' paths

for fld in flds:
    sub_flds = os.listdir(data_dir + '/' + fld)
    try:
        for i in sub_flds:
            i_path = data_dir + '/' + fld + '/' + i
            img = imread(i_path)
    except:
        print(data_dir + '/' + fld)
        shutil.rmtree(data_dir + '/' + fld)  # Delete folders
This actually solved my problem! Thanks a lot for sharing. A better presentation for those who need this code:
import argparse
import os
import shutil
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import imread
#os.remove(path) #Delete file
#os.removedirs(path) #Delete empty folder
def find_corrupt(folder_path):
    data_dir = folder_path
    flds = os.listdir(data_dir)
    for fld in flds:
        sub_flds = os.listdir(data_dir + '/' + fld)
        try:
            for i in sub_flds:
                i_path = data_dir + '/' + fld + '/' + i
                img = imread(i_path)
                #print(np.shape(img))
        except:
            print(data_dir + '/' + fld)
            shutil.rmtree(data_dir + '/' + fld) #Delete folders

if __name__ == "__main__":
    PARSER = argparse.ArgumentParser(description="____")
    PARSER.add_argument('-f', '--folder_path')
    ARGS = PARSER.parse_args()
    find_corrupt(str(ARGS.folder_path))
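Assuming the snippet above is saved as find_corrupt.py (the file name is arbitrary), it can be run as:
python find_corrupt.py -f /path/to/your/dataset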
From the terminal you can also look for empty (zero-byte) files with: find yourdatasets/ -size -1
I checked my dataset dir and surprisingly found a hidden .DS_Store! I wondered for a while why my Ubuntu server would have this .DS_Store, and finally I realized the dataset was uploaded from my local Mac!
Thanks a lot, by removing .DS_Store I solved this problem.
Yes, removing .DS_Store is the solution I found for this problem too. Thanks @aginpatrick.
Hi, I have the same issue while running the open_pose training code on my own dataset, but my dataset consists of .mat files containing depth images (grayscale). I load them with the scipy.io module in Python and repeat the single channel on two other channels to get 3 channels, but I still get this error during training. Can someone help me? Thank you.
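For reference, a minimal sketch of that kind of conversion (the file name and the 'depth' key inside the .mat file are placeholders):

import numpy as np
import scipy.io

# Load a grayscale depth map from a .mat file and repeat it on three channels
mat = scipy.io.loadmat('sample_depth.mat')
depth = mat['depth'].astype(np.float32)                    # shape (H, W)
depth_3ch = np.repeat(depth[:, :, np.newaxis], 3, axis=2)  # shape (H, W, 3)
print(depth_3ch.shape)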
parser.add_argument('--epoch_size', type=int,
help='Number of batches per epoch.', default=1000)
This is code in train_tripletloss.py of the facenet project.
Hi,
I am trying to train a facenet model on my own dataset. My dataset consists of images which were obtained by using a face detector developed at our lab at CMU. There is no problem with the generated crops. I have used the same dataset for training different models in Caffe.
When I change the data_dir path to my own dataset, the training starts and aborts at the third iteration in the first epoch itself. This is the run command that I use:
I have looked at other solutions where people suggest reducing the --epoch_size value, but I see that in the code the function does not depend on num_epochs, so this is not a valid solution any more. Also, I am using 'jpeg' images in my dataset and I have already changed the corresponding line.
I have the exact error message with the stacktrace below:
I'd really appreciate any help that I can get. I really need to move past this error so that I can train on the different datasets that are available at my lab.