PacktPublishing / Generative-Adversarial-Networks-Projects

Generative Adversarial Networks Projects, published by Packt
MIT License

Chapter 3: load_images takes hours #103

Open leminhds opened 3 years ago

leminhds commented 3 years ago

The function load_images takes an absurd amount of time to complete. I have used the exact code provided here, and it has been running for a few hours now. Is there a more effective way to load the images? Or, after the first run, can we save the data in a format that can easily be reloaded later?

    def load_images(data_dir, image_paths, image_shape):
        images = None

        for i, image_path in enumerate(image_paths):
            print()
            try:
                # Load image
                loaded_image = image.load_img(os.path.join(data_dir, image_path), target_size=image_shape)

                # Convert PIL image to numpy ndarray
                loaded_image = image.img_to_array(loaded_image)

                # Add another dimension (Add batch dimension)
                loaded_image = np.expand_dims(loaded_image, axis=0)

                # Concatenate all images into one tensor
                if images is None:
                    images = loaded_image
                else:
                    images = np.concatenate([images, loaded_image], axis=0)
            except Exception as e:
                print("Error:", i, e)

        return images
kailashahirwar commented 3 years ago

Hi @leminhds

It might be taking time due to the concatenation operation: `np.concatenate` copies the entire accumulated array on every iteration, so the loop does a quadratic amount of copying as the dataset grows.
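For example, appending to a Python list and stacking once at the end avoids the repeated copies. This is just a sketch: `fake_load` is a placeholder for the real `image.load_img`/`img_to_array` calls in the book's code.

```python
import numpy as np

# Hypothetical stand-in for the per-image loading done in the book's code
def fake_load(i):
    return np.full((64, 64, 3), i, dtype=np.float32)

# Collect arrays in a Python list (cheap appends),
# then stack once at the end instead of concatenating every iteration
frames = [fake_load(i) for i in range(10)]
images = np.stack(frames, axis=0)
print(images.shape)  # (10, 64, 64, 3)
```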

Yes, you can store the big tensor locally and reload it later. You can store it in NumPy's binary format (or JSON, though that is much less efficient for large arrays).
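A sketch of that caching idea with `np.save`/`np.load` (the file name and the random tensor are just examples standing in for the loaded image batch):

```python
import os
import tempfile
import numpy as np

# Example tensor standing in for the loaded image batch
images = np.random.rand(10, 64, 64, 3).astype(np.float32)

# Save once after the first run (path is just an example)
cache_path = os.path.join(tempfile.gettempdir(), "images_cache.npy")
np.save(cache_path, images)

# On later runs, reload the tensor directly instead of re-decoding every file
cached = np.load(cache_path)
```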

193947 commented 1 year ago

Modify `load_images` to preallocate the output array and assign each image in place (this also fixes the original hardcoded `(64, 64, 3)` by deriving the shape from the `image_shape` argument):

    def load_images(data_dir, image_paths, image_shape):
        # Preallocate the output tensor once; image_shape is the (height, width)
        # passed to target_size, so each loaded image is image_shape + (3,)
        images = np.empty(shape=(len(image_paths), *image_shape, 3))

        for i, image_path in enumerate(image_paths):
            try:
                # Load image
                loaded_image = image.load_img(os.path.join(data_dir, image_path), target_size=image_shape)

                # Convert PIL image to numpy ndarray and write it into its slot
                images[i] = image.img_to_array(loaded_image)
            except Exception as e:
                print("Error:", i, e)

        return images