Justin-Tan / generative-compression

TensorFlow Implementation of Generative Adversarial Networks for Extreme Learned Image Compression
MIT License

How to use? I cannot run it successfully #2

Closed sixleaves closed 6 years ago

sixleaves commented 6 years ago

Here is the error message; I don't know how to solve it:

 No such file or directory
Traceback (most recent call last):
  File "/Users/sixleaves/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
    return fn(*args)
  File "/Users/sixleaves/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/Users/sixleaves/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: data/leftImg8bit/train/strasbourg/strasbourg_000000_010372_leftImg8bit.png; No such file or directory
     [[Node: ReadFile = ReadFile[](arg0)]]
     [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,?,?,3]], output_types=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](IteratorFromStringHandle)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 106, in <module>
    main()
  File "train.py", line 103, in main
    train(config_train, args)
  File "train.py", line 57, in train
    start_time, epoch, args.name, G_loss_best, D_loss_best)
  File "/Users/sixleaves/generative-compression/utils.py", line 78, in run_diagnostics
    G_loss, D_loss, summary = sess.run([model.G_loss, model.D_loss, model.merge_op], feed_dict=feed_dict_test)
  File "/Users/sixleaves/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/Users/sixleaves/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "/Users/sixleaves/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
    run_metadata)
  File "/Users/sixleaves/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: data/leftImg8bit/train/strasbourg/strasbourg_000000_010372_leftImg8bit.png; No such file or directory
     [[Node: ReadFile = ReadFile[](arg0)]]
     [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,?,?,3]], output_types=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](IteratorFromStringHandle)]]
Justin-Tan commented 6 years ago

Read the data / setup part of the README. If you want to train on the Cityscapes dataset you have to:

  1. Register and download the leftImg8bit dataset.
  2. Resize each image to 512 x 1024 (optional).
  3. Create a Pandas DataFrame holding the relative/absolute paths to those images, then save it as an h5 file. Example dataframes are provided in the data/ directory.
  4. Edit directories.train in config.py to point at the saved dataframe.
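
Step 4 would then be a one-line edit; a sketch, assuming config.py keeps the data paths on a directories class as the reference above suggests (the filename is a placeholder):

    # config.py -- point the training entry at the saved dataframe
    class directories:
        train = 'data/paths_train.h5'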
sixleaves commented 6 years ago

By the way, if I want to train on my own dataset, how can I do that?

sixleaves commented 6 years ago

What's your data structure? I don't know how to create it:

    import h5py
    import pandas as pd

    f = h5py.File("./data/mytest.h5", "w")
    grp = f.create_group("df")
    df = pd.DataFrame(["1.png", "2.png"], columns=["path"])
    # Fails here: create_dataset cannot store a DataFrame of strings directly
    grp.create_dataset(name="path", data=df)
sixleaves commented 6 years ago

Here are two methods I wrote that can automatically create the h5 file:

    import os

    import pandas as pd

    class Data:

        @staticmethod
        def list_dir(rootDir, extensions):
            # Return the paths of all files directly under rootDir that
            # match one of the given extensions
            rootDir = rootDir.rstrip("/")
            filePaths = []
            for filename in os.listdir(rootDir):
                if filename.endswith(tuple(extensions)):
                    filePaths.append(os.path.join(rootDir, filename))
            return filePaths

        @staticmethod
        def write_filePaths_to_H5(dir, file_extensions, h5_file_name):
            # Collect the file paths into a single-column DataFrame and
            # store it under the key 'df', matching the example dataframes
            filePaths = Data.list_dir(dir, file_extensions)
            df = pd.DataFrame({"path": filePaths})
            h5 = pd.HDFStore(h5_file_name, "w")
            h5["df"] = df
            h5.close()
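
Called like this, for example (directory, extension, and output filename are placeholders):

    Data.write_filePaths_to_H5("./data/train_images", [".png"], "./data/paths_train.h5")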
Justin-Tan commented 6 years ago

If you want to train on your own dataset then just do steps 3 and 4 above. You will need to resize each image so the dimensions are a multiple of 16, because the encoder produces a feature map of H/16 x W/16 x C={2,4,8,16} from an [H, W, C=channels] image.
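
A minimal resize sketch with Pillow, rounding each dimension down to the nearest multiple of 16 (the helper name and paths are hypothetical):

    from PIL import Image

    def resize_to_multiple_of_16(in_path, out_path):
        # Round width and height down so the encoder's H/16 x W/16
        # feature map has integer dimensions
        img = Image.open(in_path)
        w, h = img.size
        img.resize(((w // 16) * 16, (h // 16) * 16), Image.LANCZOS).save(out_path)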

The easiest way to create a dataframe holding the path names would be to glob the files (files = glob.glob('/path/to/your/dataset/*{extension}')), then call the DataFrame constructor in Pandas, name the single column 'path', and use df.to_hdf.
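
Putting that together, a minimal sketch (the dataset path, extension, and output filename are placeholders; the key 'df' follows the example dataframes in data/):

    import glob
    import pandas as pd

    # Collect every matching image path in the dataset directory
    files = glob.glob('/path/to/your/dataset/*.png')

    # Single column named 'path', stored under the key 'df'
    df = pd.DataFrame({'path': files})
    df.to_hdf('data/paths_train.h5', key='df', mode='w')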

sixleaves commented 6 years ago

So how do I split the dataset into training and test sets? I don't know what the principle behind it is, and I don't really care, but I want to know how to do the split. Can I directly compress the resources in my game project according to your rules?
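
A simple way to do the split is to shuffle the full path list and write one h5 file per subset; a minimal sketch reusing the Data.list_dir helper above (the 90/10 ratio, directory, and filenames are arbitrary):

    import random
    import pandas as pd

    paths = Data.list_dir('./data/my_images', ['.png'])
    random.shuffle(paths)
    cut = int(0.9 * len(paths))  # hold out 10% of the images for testing

    pd.DataFrame({'path': paths[:cut]}).to_hdf('data/paths_train.h5', key='df', mode='w')
    pd.DataFrame({'path': paths[cut:]}).to_hdf('data/paths_test.h5', key='df', mode='w')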

sixleaves commented 6 years ago

What is the compressed image format?