Open happsky opened 5 years ago
@kregmi The X-Fork and X-Seq models work for me; however, I ran into the following error when using the pretrained pix2pix model:
checkpoints_dir ./checkpoints
/home/csdept/torch/install/bin/luajit: /home/csdept/torch/install/share/lua/5.1/torch/File.lua:272: read error: read 5499 blocks instead of 1484150018 at /home/csdept/torch/pkg/torch/lib/TH/THDiskFile.c:344
stack traceback:
	[C]: in function 'readChar'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:272: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:368: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:353: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:353: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
	...e/csdept/torch/install/share/lua/5.1/nngraph/gmodule.lua:495: in function 'read'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:409: in function 'load'
	...dept/projects/cross_view_image_synthesis_2/util/util.lua:282: in function 'load'
	test_pix2pix.lua:82: in main chunk
	[C]: in function 'dofile'
	...dept/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x00405d50
Commands I used:
DATA_ROOT=/home/csdept/projects/crossnet/CVUSA/cvusa name=X-Fork-cvusa which_direction=g2a batchSize=4 phase=test which_epoch=35 th test_fork.lua;
DATA_ROOT=/home/csdept/projects/crossnet/CVUSA/cvusa name=x_pix2pix_cvusa which_direction=g2a batchSize=4 phase=test which_epoch=35 th test_pix2pix.lua;
Seems like the model was not uploaded correctly. I have uploaded the models again here. Also, I can see from your commands that you tried to synthesize aerial images (which_direction=g2a), but the experiments on the CVUSA dataset were performed in the a2g direction only, and the shared models synthesize streetview images. For example, the pix2pix test command becomes:
DATA_ROOT=/home/csdept/projects/crossnet/CVUSA/cvusa name=x_pix2pix_cvusa which_direction=a2g batchSize=4 phase=test which_epoch=35 th test_pix2pix.lua;
checkpoints_dir ./checkpoints
/home/csdept/torch/install/bin/luajit: /home/csdept/torch/install/share/lua/5.1/torch/File.lua:375: unknown object
stack traceback:
	[C]: in function 'error'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:375: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:368: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:353: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
	...e/csdept/torch/install/share/lua/5.1/nngraph/gmodule.lua:495: in function 'read'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
	/home/csdept/torch/install/share/lua/5.1/torch/File.lua:409: in function 'load'
	...dept/projects/cross_view_image_synthesis_2/util/util.lua:282: in function 'load'
	test_pix2pix.lua:82: in main chunk
	[C]: in function 'dofile'
	...dept/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x00405d50
It still doesn't work; any ideas? Also, for pix2pix, which image format should I use as input: 4 images or just 2? Thank you for reminding me about the direction; I use a different image order, so that should not be a problem.
Not sure what the problem was. I re-uploaded the model, then downloaded it and tried it on my machine. It's working for me now. Hope it will work for you as well. Let me know!
My bad, it works like a charm now. Thanks for your help.
From the command line, you can type th and hit enter to get into the Torch environment. There you can load the required packages with require 'nngraph' and require 'cudnn'. After that you can load the model with model = torch.load('35_net_G.t7').
Also, about the image format: you should input 4 images, the same as for X-Fork and X-Seq; the dataloader is written to expect 4 images concatenated.
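A minimal sketch of this layout (my assumption: four same-size crops concatenated side by side into one strip, with 256x256 crops as a placeholder size; the actual dataloader in the repo is Lua/Torch and may use different dimensions):

```python
import numpy as np

# Assumed crop size; the real dataset may differ.
h, w = 256, 256
street = np.zeros((h, w, 3), dtype=np.uint8)
aerial = np.ones((h, w, 3), dtype=np.uint8)
seg_street = np.full((h, w, 3), 2, dtype=np.uint8)
seg_aerial = np.full((h, w, 3), 3, dtype=np.uint8)

# The four images are concatenated horizontally into one input strip.
combined = np.concatenate([street, aerial, seg_street, seg_aerial], axis=1)
print(combined.shape)  # (256, 1024, 3)

# A dataloader expecting 4 images slices the strip back into crops.
crops = [combined[:, i * w:(i + 1) * w] for i in range(4)]
```

The same slicing recovers each crop exactly, which is why all four images must share the same height and width.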
It works now. Can you share the pretrained X-Seq model for segmentation generation on this dataset?
Unfortunately, I could not locate the X-Seq model for segmentation map generation on the CVUSA dataset. I can train the network and share the model with you; this will take about a couple of days. Alternatively, you can train it yourself.
OK, thank you. Since there is no segmentation map for the aerial images in this dataset, what did you do for data preparation of {streetview image, aerial image, segmentation map for streetview image, segmentation map for aerial image}?
You can use any image in the position of the segmentation map for the aerial image; say, use the segmentation map of the streetview image. This is a quick fix and a good option, e.g. {streetview image, aerial image, segmentation map for streetview image, segmentation map for streetview image}. This will not affect training because we train in the a2g direction only.
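The quick fix above can be sketched as follows (make_quadruplet is a hypothetical helper name, and 256x256 crops are an assumed size; the point is only that the streetview segmentation map is duplicated into the fourth slot):

```python
import numpy as np

def make_quadruplet(street, aerial, seg_street):
    """Build the 4-image training strip when no aerial segmentation
    map exists: reuse the streetview segmentation map in slot 4.
    Per the note above, this is harmless for a2g-only training."""
    return np.concatenate([street, aerial, seg_street, seg_street], axis=1)

# Assumed crop size for illustration.
h, w = 256, 256
quad = make_quadruplet(
    np.zeros((h, w, 3), np.uint8),   # streetview image
    np.zeros((h, w, 3), np.uint8),   # aerial image
    np.full((h, w, 3), 7, np.uint8), # streetview segmentation map
)
print(quad.shape)  # (256, 1024, 3)
```

Slots 3 and 4 of the resulting strip are identical copies of the streetview segmentation map.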
Hi @kregmi, the pretrained models for X-Seq and pix2pix have the same size on this dataset, i.e., 535.1 MB, and the results produced by both models are quite similar. Can you check this? Thank you so much.
@kregmi can you help me?
Can you share pretrained model on CVUSA dataset?