hujinhan12 closed this issue 5 years ago
Hi, I guess it is likely related to the tfrecords generation. You might need to modify the code for generating tfrecords to match your dataset. Using tfrecords was probably a bad idea in hindsight, and I apologize for not realizing that earlier. Nowadays there are much easier ways to do random cropping on the input images. If you stick with the tfrecords, do check the sizes carefully. Another thing in my original tfrecord generation code that could easily go wrong is the way I handle border patches. You can loop through the tfrecords and make sure they are correct before training.
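For "loop through the tfrecords and make sure they are correct before training", here is a minimal standalone sketch (not the repository's own code) that walks a TFRecord file using the documented wire format, without TensorFlow, so you can count records and eyeball payload sizes. It deliberately skips CRC verification; the file path is a placeholder.

```python
import struct

def iter_tfrecords(path):
    """Yield the raw payload bytes of each record in a TFRecord file.

    TFRecord wire format per record:
      uint64 length (little-endian), uint32 masked CRC of the length,
      `length` bytes of data, uint32 masked CRC of the data.
    The two CRC fields are skipped here; this is only meant for a quick
    sanity check (record count, payload sizes), not for robust parsing.
    """
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:          # end of file
                break
            (length,) = struct.unpack("<Q", header)
            f.read(4)                     # skip length CRC
            data = f.read(length)
            f.read(4)                     # skip data CRC
            yield data

if __name__ == "__main__":
    # Placeholder path -- point this at one of your generated tfrecord files.
    sizes = [len(r) for r in iter_tfrecords("train.tfrecords")]
    print(len(sizes), "records; payload sizes:", sizes[:5], "...")
```

If the record count is lower than expected, or the payload sizes are inconsistent (e.g. border patches smaller than the crop size), the generation step is the likely culprit.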
Thank you for your quick reply. I will take a look and let you know.
Hello, I finally found that the problem is not caused by the data. It was the read/write speed of my hard drive. I moved the data to an SSD and the problem is solved. Thanks again for your reply.
Good to hear, thanks for sharing.
Hello, have you encountered an OutOfRangeError?

OutOfRangeError (see above for traceback): RandomShuffleQueue '_1_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 8, current size 3)
[[node shuffle_batch (defined at /mnt/Data/wk_linux/DeepHDR_T/DeepHDR/load_data.py:85) ]]
File "/mnt/Data/wk_linux/DeepHDR_T/DeepHDR/model.py", line 120, in build_model
self.in_LDRs, self.in_HDRs, self.ref_LDRs, self.refHDR, _, _ = load_data(filename_queue, config)

My system is Ubuntu 16.04, TensorFlow 1.13.1, Python 3.5. I used my own dataset, which has higher resolution and contains 80 sets of images in .png format. Looking forward to hearing some solutions from you, thanks.
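For context on what this error means: in TF1 input pipelines, the RandomShuffleQueue is closed once the reader threads finish (or fail to decode files), and a dequeue of batch_size elements from a closed queue raises OutOfRangeError if fewer elements remain. The toy model below (plain Python, not TensorFlow) just illustrates that semantics; the function name and error type are illustrative, not TF's API.

```python
import queue

def dequeue_many(q, n):
    """Toy model of dequeue_many on a *closed* TF1 RandomShuffleQueue.

    Once no more elements will be enqueued, a request for n elements
    fails if fewer than n remain -- producing a message shaped like
    "requested 8, current size 3" in the traceback above.
    """
    if q.qsize() < n:
        raise RuntimeError(
            "closed and has insufficient elements "
            f"(requested {n}, current size {q.qsize()})"
        )
    return [q.get() for _ in range(n)]
```

So with only 3 examples left in the queue and a batch size of 8, the dequeue cannot be satisfied. Typical things to check: whether all 80 image sets were actually read and decoded (a single unreadable .png can end the reader threads early), and whether the number of usable examples is at least the batch size.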