anh-nn01 / Satellite-Imagery-to-Map-Translation-using-Pix2Pix-GAN-framework

PyTorch implementation of the Pix2Pix framework: a U-Net generator trained within a Generative Adversarial Network to translate satellite imagery into the equivalent map.

Invalid links for dataset and weights in readme #1

Open nazikus opened 2 months ago

nazikus commented 2 months ago

Hi,

Access to the Google Drive links in the readme is restricted. Is it possible to update the links and grant access to the data, so that the results from this repo can be reproduced?

> Click this link to download the trained weights for the Sat2Map Generator and Discriminator: Download Weights
> Dataset: Download Sat2Map Dataset

Thank you

rolyhudson commented 1 month ago

Echoing that request. Please provide access to the dataset.

anh-nn01 commented 1 month ago

Thank you for your interest in my repo! If I remember correctly, this is the link to download the original dataset: http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/maps.tar.gz.

I do not remember exactly whether there was any preprocessing step between the original dataset above and mine, but if there was, it was only some very minor cropping to make loading easier.

Unfortunately, the trained weight files are no longer available: they were stored in the Drive of one of the companies I interned for, and that account was removed when I left. However, you can use the notebook and the dataset to train new weights.

If there is any further issue, please feel free to open a new issue!

Thank you very much again!

rolyhudson commented 1 month ago

That is great, thanks. I now have the dataset.

When I run step three in your notebook ("Import Dataset and load them into batches"), I get the error: `FileNotFoundError: Couldn't find any class folder in ./extracted_data/maps/train.`

```
FileNotFoundError                         Traceback (most recent call last)
Cell In[4], line 11
      1 data_dir = "./extracted_data/maps"
      3 data_transform = transforms.Compose([
      4     transforms.Resize((256, 512)),
      5     transforms.CenterCrop((256, 512)),
   (...)
      8     transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
      9 ])
---> 11 dataset_train = datasets.ImageFolder(root=os.path.join(data_dir, "train"), transform=data_transform)
     12 dataset_val = datasets.ImageFolder(root=os.path.join(data_dir, "val"), transform=data_transform)
     14 dataloader_train = torch.utils.data.DataLoader(dataset_train, batch_size=bs, shuffle=True, num_workers=0)
```

Wondering if that helps you recall any preprocessing that may have been done on the images?

Maybe splitting each image into the input and target halves?