yangyingni opened this issue 5 years ago
Hi, is there a pretrained model? Could you please provide one?
The pretrained model I am using has relatively few ground-truth images. Do you have a dataset with reasonably clean parsing maps that I could use as a reference? I am currently using CelebA-HQ with corresponding parsing maps generated by a GAN, about 15,000 images, which is somewhat less than what the original authors used.
I only have the Helen face parsing maps at the moment >_<; I have just started working in this direction. Could you send me a pretrained model first so I can try it out... Thanks. Email: 549743857@qq.com
You have reproduced so many super-resolution papers, but I don't see any FSRGAN training code in your repo. I have also been working on super-resolution recently; could we add each other on QQ to discuss it?
Hi, could someone please provide me the pretrained model sr_1_4_0model_epoch_160_iter_0.pth? And also, what about lr_no_noise.npy, 11_parsing_maps.npy, gts.npy, val_lr1.npy and val_gts1.npy? How can I get them? Thanks.
Would you mind providing me the Helen dataset with face parsing maps? I used CelebA-HQ for training; could you provide a BaiduNetdisk link so I can download it? I will update the dataloader for the Helen dataset.
Hi cydiachen, any news about updating your repo with the missing files: lr_no_noise.npy, 11_parsing_maps.npy, gts.npy, val_lr1.npy and val_gts1.npy? Can you explain to us how these files are generated? I'm confused about how to generate the heatmaps... Thanks
Can you please share the parsing maps for the Helen dataset with us? Thanks
Hello, when I run the code and dataset you provided, an error occurs in the dataloader while looping over the 11 parsing-map images. Why is that?? It jumps out of the loop when i equals 8 instead of reaching len(str) = 11.
@liushuangmax can you share the part of code that you are talking about ? Which file ? Networks.py ? models.py ...?
warnings.warn(_use_error_msg)
Checkpoint saved to /home/img/Desktop/net/weights/_ParsingMaps_model_epoch_0_iter_0.pth
OpenCV Error: Assertion failed (ssize.area() > 0) in resize, file /opt/conda/conda-bld/opencv_1491943970124/work/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp, line 3229
(the line above is repeated 10 times in the log)
```
Traceback (most recent call last):
  File "/home/img/Desktop/net/train.py", line 88, in <module>
  File "/home/img/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/img/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in <listcomp>
  File "/home/img/Desktop/net/data/dataloader.py", line 111, in __getitem__
    self.is_parsing_map)
  File "/home/img/Desktop/net/data/dataloader.py", line 64, in load_lr_hr_prior
    hm_resized = cv2.resize(hm, (64, 64), interpolation=cv2.INTER_CUBIC) / 255.0
cv2.error: /opt/conda/conda-bld/opencv_1491943970124/work/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp:3229: error: (-215) ssize.area() > 0 in function resize
```
```python
if is_parsing_map:
    str = ['skin.png','lbrow.png','rbrow.png','leye.png','reye.png','lear.png','rear.png','nose.png','mouth','ulip.png','llip.png']
    hms = np.zeros((64, 64, len(str)))
    for i in range(len(str)):
        (onlyfilePath, img_name) = os.path.split(file_path)
        full_name = onlyfilePath + "/Parsing_Maps/" + img_name[:-4] + "_" + str[i]
        hm = cv2.imread(full_name, cv2.IMREAD_GRAYSCALE)
        hm_resized = cv2.resize(hm, (64, 64), interpolation=cv2.INTER_CUBIC) / 255.0
        hms[:, :, i] = hm_resized

img = cv2.resize(img, (output_width, output_height), interpolation=cv2.INTER_CUBIC)
img_lr = cv2.resize(img, (int(output_width / scale), int(output_height / scale)), interpolation=cv2.INTER_CUBIC)

if is_scale_back:
    img_lr = cv2.resize(img_lr, (output_width, output_height), interpolation=cv2.INTER_CUBIC)
    return img_lr, img, hms
else:
    return img_lr, img, hms
```
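Looking at the loop above, could the problem be that the 'mouth' entry in str has no '.png' extension? Then at i == 8 cv2.imread returns None for that file and the following cv2.resize fails with the ssize.area() > 0 assertion. A small check along these lines should confirm it (the corrected suffix names and the example image path are only my guess):

```python
import os
import cv2
import numpy as np

# Guard for the parsing-map loop: report maps that fail to load instead of
# crashing inside cv2.resize. Note 'mouth.png' instead of the bare 'mouth' entry.
file_path = "./data/CelebA-HQ-img/0.jpg"              # example input image path
onlyfilePath, img_name = os.path.split(file_path)
suffixes = ['skin.png', 'lbrow.png', 'rbrow.png', 'leye.png', 'reye.png',
            'lear.png', 'rear.png', 'nose.png', 'mouth.png', 'ulip.png', 'llip.png']
hms = np.zeros((64, 64, len(suffixes)))
for i, suffix in enumerate(suffixes):
    full_name = onlyfilePath + "/Parsing_Maps/" + img_name[:-4] + "_" + suffix
    hm = cv2.imread(full_name, cv2.IMREAD_GRAYSCALE)
    if hm is None:
        print("missing or unreadable parsing map:", full_name)
        continue
    hms[:, :, i] = cv2.resize(hm, (64, 64), interpolation=cv2.INTER_CUBIC) / 255.0
```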
Did you create a folder named "Parsing_Maps" under ./data/CelebA-HQ-img/ ?
On my side, I modified the Dataloader as:
```python
if is_parsing_map:
    str = ['skin.png','l_brow.png','r_brow.png','l_eye.png','r_eye.png','l_ear.png','r_ear.png','nose.png','mouth.png','u_lip.png','l_lip.png']
    hms = np.zeros((64, 64, len(str)))
    for i in range(len(str)):
        (onlyfilePath, img_name) = os.path.split(file_path)
        _img_name = img_name[:-4]
        _img_name = '00000' + _img_name
        _lenght = len(_img_name) - 5
        _img_name = _img_name[_lenght:]
        full_name = onlyfilePath + "/Parsing_Maps/" + _img_name + "_" + str[i]
        if os.path.exists(full_name):
            hm = cv2.imread(full_name, cv2.IMREAD_GRAYSCALE)
            hm_resized = cv2.resize(hm, (64, 64), interpolation=cv2.INTER_CUBIC) / 255.0
            hms[:, :, i] = hm_resized
```
Keep in touch :)
@liushuangmax: When you say "First of all, thank you very much for your reply!! I successfully solved the dataloader problem.", can you explain what the problem was and how you solved it, so we can all learn from it? :)
For the question "But I have a new problem: the parsing map of the ground truth has 11 channels, while the feature map obtained through the prior network has 128 channels, so the dimensions do not match when computing the loss?" I think the HourGlass is implemented to extract the parsing maps and the landmarks at the same time, but the author is using only the parsing maps (11 parsing maps per image). I hope he can explain it when he is free :)
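As far as I understand it, that mismatch is usually handled by a 1x1 convolution that projects the 128 HourGlass feature channels down to 11 parsing-map channels before the prior loss is computed, which is what the CONV 1*1 in the first version was doing. A minimal sketch of the idea (layer and tensor names are mine, not the author's):

```python
import torch
import torch.nn as nn

# Hypothetical projection head: 128 HourGlass feature channels -> 11 parsing-map channels,
# so the prior output can be compared against the 11-channel ground-truth parsing maps.
to_parsing = nn.Conv2d(in_channels=128, out_channels=11, kernel_size=1)

features = torch.randn(4, 128, 64, 64)    # batch of HourGlass feature maps
gt_maps = torch.rand(4, 11, 64, 64)       # batch of ground-truth parsing maps
pred_maps = to_parsing(features)          # shape (4, 11, 64, 64)
loss = nn.MSELoss()(pred_maps, gt_maps)   # channel counts now match
```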
For the imread, I suggest adding some prints of the path you build, to check whether the generated path actually points to your images or not (I had this issue, which is why I changed this from:
if is_parsing_map: str = ['skin.png','lbrow.png','rbrow.png','leye.png','reye.png','lear.png','rear.png','nose.png','mouth','ulip.png','llip.png']
To :
if is_parsing_map: str = ['skin.png','l_brow.png','r_brow.png','l_eye.png','r_eye.png','l_ear.png','r_ear.png','nose.png','mouth.png','u_lip.png','l_lip.png']
Because my parsing map images were named with underscores (maybe that's not the case for the author and for you!)
And changed from this :
full_name = onlyfilePath + "/Parsing_Maps/" + img_name[:-4] + "_"+ str[i]
To this :
```python
_img_name = img_name[:-4]
_img_name = '00000' + _img_name
_lenght = len(_img_name) - 5
_img_name = _img_name[_lenght:]
full_name = onlyfilePath + "/Parsing_Maps/" + _img_name + "_" + str[i]
```
Because my parsing map images are named in the style 00000_hair.png and 00000_l_brow.png, not 0_hair.png and 0_lbrow.png.
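The same zero-padding can be written more compactly with str.zfill; a small illustrative sketch (the file name and the five-digit width are just my own example, check how your maps are actually named):

```python
import os

# Hypothetical example: pad the numeric image id to five digits, as in 00000_l_brow.png
file_path = "./data/CelebA-HQ-img/0.jpg"
only_file_path, img_name = os.path.split(file_path)
img_id = img_name[:-4].zfill(5)            # '0' -> '00000'
full_name = os.path.join(only_file_path, "Parsing_Maps", img_id + "_l_brow.png")
print(full_name)                           # ./data/CelebA-HQ-img/Parsing_Maps/00000_l_brow.png
```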
For the test Python code, I used the first version published by the author (I used .NPY files instead of reading the images directly from folders, because that did not work for me in Google Colab). Here is the test code:
```python
from __future__ import print_function

import argparse
import os
import random
import time
import glob
from math import log10

import numpy as np
import cv2
import skimage
import scipy.io
import scipy.misc
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision
import torchvision.datasets as dset
import torchvision.utils as vutils
from torch.utils.data import DataLoader
from torch.autograd import Variable
from tensorboardX import SummaryWriter

from dataset import *
from networks import *

parser = argparse.ArgumentParser()
parser.add_argument('--test', default='True', action='store_true', help='enables test during training')
parser.add_argument('--mse_avg', action='store_true', help='enables mse avg')
parser.add_argument('--num_layers_res', type=int, help='number of the layers in residual block', default=2)
parser.add_argument('--nrow', type=int, help='number of the rows to save images', default=1)
parser.add_argument('--batchSize', type=int, default=64, help='input batch size')
parser.add_argument('--test_batchSize', type=int, default=64, help='test batch size')
parser.add_argument('--save_iter', type=int, default=10, help='the interval iterations for saving models')
parser.add_argument('--test_iter', type=int, default=500, help='the interval iterations for testing')
parser.add_argument('--cdim', type=int, default=3, help='the channel-size of the input image to network')
parser.add_argument("--nEpochs", type=int, default=1000, help="number of epochs to train for")
parser.add_argument("--start_epoch", default=0, type=int, help="Manual epoch number (useful on restarts)")
parser.add_argument('--lr', type=float, default=0.7 * 2.5 * 10 ** (-4), help='learning rate, default=0.0002')
parser.add_argument('--cuda', default='False', action='store_true', help='enables cuda')
parser.add_argument('--ngpu', type=int, default=1, help='number of GPUs to use')
parser.add_argument('--outf', default='./results/1_4/', help='folder to output images')
parser.add_argument('--manualSeed', type=int, help='manual seed')
parser.add_argument("--pretrained", default="./model/sr_1_4_0model_epoch_107_iter_0.pth", type=str,
                    help="path to pretrained model (default: none)")


def main():
    global opt, model
    opt = parser.parse_args('--num_layers_res 2'.split())
    print(opt)

    try:
        os.makedirs(opt.outf)
    except OSError:
        pass

    if opt.manualSeed is None:
        opt.manualSeed = random.randint(1, 10000)
    print("Random Seed: ", opt.manualSeed)
    random.seed(opt.manualSeed)
    torch.manual_seed(opt.manualSeed)
    if opt.cuda is True:
        torch.cuda.manual_seed_all(opt.manualSeed)

    cudnn.benchmark = True

    criterion_l1 = nn.L1Loss(size_average=True)
    criterion_MSE = nn.MSELoss(size_average=True)

    # if torch.cuda.is_available() and not opt.cuda:
    #     print("WARNING: You have a CUDA device, so you should probably run with --cuda")

    ngpu = int(opt.ngpu)

    # -------------- build models --------------------------
    with torch.no_grad():
        srnet = NetSR(num_layers_res=opt.num_layers_res)
        if opt.cuda is True:
            srnet = srnet.cuda()
            criterion_l1 = criterion_l1.cuda()
            criterion_MSE = criterion_MSE.cuda()

        if opt.pretrained:
            if os.path.isfile(opt.pretrained):
                print("=> loading model '{}'".format(opt.pretrained))
                weights = torch.load(opt.pretrained, map_location='cpu')
                # debug
                print(weights)
                # keep only the weights whose names match the current model
                pretrained_dict = weights['model'].state_dict()
                model_dict = srnet.state_dict()
                pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
                model_dict.update(pretrained_dict)
                # 3. load the new state dict
                srnet.load_state_dict(model_dict)
                # srnet.load_state_dict(weights['model'].state_dict())
            else:
                print("=> no model found at '{}'".format(opt.pretrained))

        print("Display Network Structure:")
        print(srnet)

        size = 128
        batch = 14
        save_freq = 2
        result_dir = './results/'
        result_dir0 = './results/1_4/'

        # using tensorboardX to visualize our loss function
        writer = SummaryWriter('./log')

        # the test faces are stored as one stacked (N, H, W, C) array in a .npy file
        val_face0 = np.load('./data/NPYFiles/test_V3.npy')

        # srnet.train()
        avg_psnr = 0.0
        LENGTH = val_face0.shape[0] // batch
        for titer in range(LENGTH):
            input0 = val_face0[titer * batch:(titer + 1) * batch, :, :, :]
            input0 = torch.from_numpy(np.float32(input0)).permute(0, 3, 1, 2)
            if opt.cuda is True:
                input0 = input0.cuda()
            try:
                with torch.no_grad():
                    output0, parsing_maps, output = srnet(input0)
            except RuntimeError as exception:
                if "out of memory" in str(exception):
                    print("Warning: out of memory")
                    # if hasattr(torch.cuda, 'empty_cache'):
                    #     torch.cuda.empty_cache()
                else:
                    raise exception

            # save every super-resolved face in the batch as a JPEG
            output11 = output.permute(0, 2, 3, 1).cpu().data.numpy()
            for n in range(batch):
                output01 = output11[n, :, :, :]
                scipy.misc.toimage(output01, high=255, low=0, cmin=0, cmax=255).save(
                    result_dir0 + 'lr_%d_%d.jpg' % (titer, n))


if __name__ == "__main__":
    main()
```
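If it helps, this is roughly how such a .npy file can be assembled from a folder of test images before uploading it to Colab; the folder path, the 32x32 LR size and the [0, 1] scaling are my own choices, not necessarily what the author used:

```python
import glob

import cv2
import numpy as np

# Hypothetical helper: stack all LR test faces into a single (N, H, W, 3) float array
# and save it, so the test script can np.load() it in one shot.
paths = sorted(glob.glob('./data/test_lr/*.jpg'))     # assumed location of the LR crops
faces = []
for p in paths:
    img = cv2.imread(p)                               # BGR, H x W x 3
    img = cv2.resize(img, (32, 32), interpolation=cv2.INTER_CUBIC)
    faces.append(img.astype(np.float32) / 255.0)      # assumed [0, 1] range
np.save('./data/NPYFiles/test_V3.npy', np.stack(faces, axis=0))
```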
@liushuangmax Yes keep in touch.
NB: The first version was implementing the CONV 1*1 to extract the 11 parsing maps (as you did in this new version)!
@JauB1981 Please format your code. It is unreadable.
Thank you for sharing. Could you kindly provide the pretrained file sr_1_4_0model_epoch_160_iter_0.pth? Thank you again.