Misaliet / Seamless-Satellite-image-Synthesis

code for paper -- "Seamless Satellite-image Synthesis"
GNU General Public License v2.0

OSError: [Errno 22] Invalid argument: '"Seamless-Satellite-image-Synthesis\\web_ui\\sss_ui\\checkpoints\\z1\\latest_net_G.pth"' #4

Open ArloOGF opened 1 year ago

ArloOGF commented 1 year ago

Hi, I'm very interested in using this software for fictional mapping and generating satellite imagery from it. I'm a total newbie when it comes to code, but this project encouraged me to pick up GitHub and try to get the code working. However, probably due to my inexperience, I keep stumbling into errors. This one is preventing me from running the "python ... migrate" command. The code can't seem to find the "latest_net_G.pth" file at the path on my PC, yet it is definitely there. I'm very confused but keen to get this working, as this is some awesome work you've developed. I would appreciate any help, and I apologise if I'm approaching this too much like a coding beginner!

Misaliet commented 1 year ago

Hi,

The "lates_net_G.pth" is the pre-trained neural network weights file. It should be in the path of "Seamless-Satellite-image-Synthesis/web_ui/sss_ui/checkpoints/*/" ("" is stand for multiple folders like "z1", "z1sn", "z2cg",...).

However, if you want to do more with my code, there are some hints I need to mention: a) This is a research project; I wrote the code in a short time and in a rough way. It may be hard to understand and modify, especially the coordinate-handling part. I am sorry, but the goal of research is not to produce polished, practical software.

b) As for the data needed to create satellite images, I only provide some sample inputs to display the research results. You can check the "Seamless-Satellite-image-Synthesis/web_ui/sss_ui/static/runtime/images/" folder. Please note one tricky thing: I provide both the map image for viewing and the semantic label image that is actually used as the input to the neural network. For example, compare "Seamless-Satellite-image-Synthesis/web_ui/sss_ui/static/runtime/images/sA/z2/00000.png" with "Seamless-Satellite-image-Synthesis/web_ui/sss_ui/static/runtime/images/sAL/z2/00000.png". Images in the "sA" folder are for human viewing, images in the "sAL" folder are the semantic label maps used as neural network input (they look completely black; you can only see the labels after adjusting the colour level range with image-processing software such as GIMP), and the "sAI" folder contains the edge images of the corresponding semantic label images, which are required by SPADE (https://github.com/NVlabs/SPADE). A semantic label image uses one colour per object type; you can learn more here: http://helper.ipam.ucla.edu/publications/gss2013/gss2013_11406.pdf. All of this means that even if you run my code successfully, you will only be able to generate the limited set of satellite images corresponding to the samples in the image folder.
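If you don't want to install GIMP just to look at a label image, a short Python sketch like this (assuming Pillow and NumPy are installed; the path is one of the sample files mentioned above) lists the label values present and stretches them so they become visible:

```python
# Sketch: inspect a "fully black"-looking semantic label image.
# Assumes Pillow and NumPy are available; the path is one of the sample files above.
import numpy as np
from PIL import Image

label = np.array(Image.open(
    "Seamless-Satellite-image-Synthesis/web_ui/sss_ui/static/runtime/images/sAL/z2/00000.png"
))

print("label values present:", np.unique(label))   # small integers, one per class

# Stretch the small label values across 0-255 so the classes become visible.
stretched = (label.astype(np.float32) / max(int(label.max()), 1) * 255).astype(np.uint8)
Image.fromarray(stretched).save("00000_visible.png")
```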

c) If you want to generate a larger area, or a theoretically infinite satellite image as in the paper, you need to prepare enough map data as input yourself. Due to copyright issues, I cannot provide this data, but you can get it from Digimap (https://digimap.edina.ac.uk/), which is free for most researchers. You need to download the vector map data (https://digimap.edina.ac.uk/help/our-maps-and-data/os_products/#main -> OS MasterMap® -> Topography) and use it to render the corresponding semantic label map images. Rendering these maps is also time-consuming.
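To give a rough idea of what "rendering semantic label maps" involves, here is a minimal sketch that rasterises vector polygons into a label image with Pillow. The class IDs, the world-to-pixel transform, and the demo footprint are placeholders I made up for illustration, not the exact pipeline used for the paper:

```python
# Sketch: rasterise vector polygons into a semantic label image with Pillow.
# Class IDs and the coordinate transform are assumed values, not the paper's pipeline.
from PIL import Image, ImageDraw

IMG_SIZE = 512
CLASS_IDS = {"building": 1, "road": 2, "water": 3, "vegetation": 4}  # assumed label scheme

def world_to_pixel(x, y, x_min, y_min, metres_per_pixel):
    """Map projected coordinates (e.g. OS grid metres) to image pixel coordinates."""
    return ((x - x_min) / metres_per_pixel,
            IMG_SIZE - (y - y_min) / metres_per_pixel)  # flip y so north is up

def render_label_map(polygons, x_min, y_min, metres_per_pixel):
    """polygons: list of (class_name, [(x, y), ...]) rings in projected coordinates."""
    label = Image.new("L", (IMG_SIZE, IMG_SIZE), 0)   # 0 = background / unknown
    draw = ImageDraw.Draw(label)
    for class_name, ring in polygons:
        pixels = [world_to_pixel(x, y, x_min, y_min, metres_per_pixel) for x, y in ring]
        draw.polygon(pixels, fill=CLASS_IDS[class_name])
    return label

# Example with a single made-up building footprint:
demo = [("building", [(100.0, 100.0), (160.0, 100.0), (160.0, 150.0), (100.0, 150.0)])]
render_label_map(demo, x_min=0.0, y_min=0.0, metres_per_pixel=1.0).save("label_tile.png")
```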

d) Once you have your data, you can use the pre-trained models I provide directly, but my models were trained on map and satellite images around Leeds, UK, so they can only generate Leeds-style satellite imagery. If you want to generate another style (another city), you need to retrain the networks on your own data (around 3000 pairs of images). This can take 2-6 days and requires a fairly good GPU (8-12 GB of memory).

e) After reading the above, you may realise that this project is not easy to get started with for a non-coder or someone who is not a Deep Learning researcher. I'm sorry, but this is research, not engineering; I don't have time to turn it into polished software that works with a single click. If you want to generate satellite images from maps, I suggest you take a look at the classic Deep Learning image-to-image translation system, pix2pix (https://phillipi.github.io/pix2pix/), and start from it to understand how Deep Learning performs this kind of translation. It may be better to start with pix2pix than to use my project directly.