neversettle-tech closed this issue 2 years ago
It is better to limit the longer side of the training images to no more than 2048. There is a note saying "No more than 2048" in the paper.
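The 2048 limit can be sketched as a simple resize rule. This is a hedged sketch, not code from the FIDTM repo: the helper name is my own, and it only computes the target size (apply it with `cv2.resize` or PIL's `Image.resize`):

```python
# Hedged sketch: compute a target size so the longer side is at most
# 2048 px, per the paper's "No more than 2048" note. The helper name
# is my own, not from the repo.
def cap_long_side(size, max_side=2048):
    """size: (width, height); returns a size with max(w, h) <= max_side,
    preserving the aspect ratio."""
    w, h = size
    scale = min(1.0, max_side / max(w, h))
    return (int(w * scale), int(h * scale))

# Example: a 4096x1024 image would be scaled to 2048x512,
# while a 1000x800 image is left unchanged.
```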
Hello, I am only using the pretrained model to run this code's demo. Have you managed to run it? Mine keeps getting stuck and I don't understand why. It's driving me crazy.
What error are you getting? As far as I know, people have run it successfully and used it on their own datasets for projects.
My steps for running the demo: 1. I downloaded the pretrained model. 2. I ran `python video_demo.py --pre model_best.pth --video_path demo.mp4`, and it always gets stuck at this point while running. I hope you can give me some pointers!
Try commenting out the following part in seg_hrnet.py:

```python
if train == True:
    if os.path.isfile(pretrained):
        pretrained_dict = torch.load(pretrained)
        logger.info('=> loading pretrained model {}'.format(pretrained))
        model_dict = self.state_dict()
        pretrained_dict = {k: v for k, v in pretrained_dict.items()
                           if k in model_dict.keys()}
        model_dict.update(pretrained_dict)
        self.load_state_dict(model_dict)
        print("load ImageNet pre_trained parameters for HR_Net")
    else:
        print('please check HRNET ImageNet pretrained model, the path ' + pretrained + ' is wrong')
        exit()
```
Thank you very much for your reply, Mr. Liang! I found the cause: my CUDA and torch versions did not match, so it was stuck at this point while loading the model.
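A quick way to rule out the CUDA/torch mismatch described above is to compare the CUDA version torch was built against with the system CUDA toolkit. This is a hedged sketch: the compare helper is my own, not part of FIDTM, and the commented-out lines use standard PyTorch attributes:

```python
# Hedged sketch: check that the CUDA version torch was built against
# matches the system CUDA toolkit (a mismatch can make model loading
# hang, as in this thread). The helper below is my own.
def versions_match(torch_cuda, system_cuda):
    """Compare major.minor components, e.g. '11.3' vs '11.3.1' -> True."""
    def major_minor(v):
        return tuple(v.split(".")[:2])
    return major_minor(torch_cuda) == major_minor(system_cuda)

# In an environment with torch installed:
# import torch
# print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
# If torch.cuda.is_available() is False, or the versions do not match,
# reinstall a torch wheel built for your CUDA version.
```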
If I use my own private dataset for training, what are the requirements for image size? I saw that fidt_generate_xx.py treats image sizes differently for different datasets.