19980715dyy opened this issue 2 years ago
```
export CUDA_HOME=/usr/local/cuda-11.0
```
In setup.py, change the line `cmdclass={'build_ext': BuildExtension}` to `cmdclass={'build_ext': BuildExtension.with_options(use_ninja=False)}`, then run:
```
export TORCH_CUDA_ARCH_LIST="7.5"
sh make.sh
```
For the "error in ms_deformable_col2im_cuda", set the nvcc flags in setup.py to:
```python
extra_compile_args["nvcc"] = [
    "-DCUDA_HAS_FP16=1",
    "-D__CUDA_NO_HALF_OPERATORS__",
    "-D__CUDA_NO_HALF_CONVERSIONS__",
    "-D__CUDA_NO_HALF2_OPERATORS__",
    "-arch=sm_60",
    "-gencode=arch=compute_60,code=sm_60",
    "-gencode=arch=compute_61,code=sm_61",
    "-gencode=arch=compute_70,code=sm_70",
    "-gencode=arch=compute_75,code=sm_75",
]
```
After making these changes, delete the build directory and recompile with make.sh.
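For context, a minimal sketch of where both changes land in setup.py (module name and source path are assumptions patterned on the stock Deformable-DETR layout, not the exact file):
```python
# Sketch of the setup.py wiring: pass the nvcc flag list above via
# extra_compile_args, and disable ninja so the flags go through the
# plain setuptools build.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

extra_compile_args = {"cxx": [], "nvcc": ["-DCUDA_HAS_FP16=1", "-arch=sm_60"]}  # abbreviated; full list above

setup(
    name="MultiScaleDeformableAttention",
    ext_modules=[
        CUDAExtension(
            name="MultiScaleDeformableAttention",
            sources=["src/vision.cpp"],  # assumed source path; use the repo's real source list
            extra_compile_args=extra_compile_args,
        )
    ],
    cmdclass={"build_ext": BuildExtension.with_options(use_ninja=False)},
)
```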
```
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
RuntimeError: The size of tensor a (91) must match the size of tensor b (3) at non-singleton dimension 0
```
How can this be solved?
It's mainly an optimizer problem. In the resume code:
```python
import copy
p_groups = copy.deepcopy(optimizer.param_groups)
optimizer.load_state_dict(checkpoint['optimizer'])
for pg, pg_old in zip(optimizer.param_groups, p_groups):
    pg['lr'] = pg_old['lr']
    pg['initial_lr'] = pg_old['initial_lr']
print(optimizer.param_groups)
```
Comment all of this out; the downside is that you can then only restore the model and lr-scheduler parameters, not the optimizer state.
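A minimal sketch of the resume path with that block commented out (variable names like `args.resume`, `model`, and `lr_scheduler` follow the usual training-loop conventions and are assumptions here):
```python
import torch

# Restore only the model weights and the lr scheduler; skip the optimizer,
# whose saved tensors no longer match the resized model.
checkpoint = torch.load(args.resume, map_location='cpu')
model.load_state_dict(checkpoint['model'])
lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
# optimizer.load_state_dict(checkpoint['optimizer'])  # commented out: shape mismatch
```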
Is there a way to fix the root cause, for example by modifying the checkpoint file?
You could try deleting the entries in the optimizer state that don't match your model's tensor sizes.
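A hedged sketch of that approach (checkpoint path and key names are assumptions; it relies on PyTorch's standard optimizer state dict layout, where state is keyed by parameter index):
```python
import torch

# Assumed setup: `optimizer` is already constructed over the current (resized) model.
ckpt = torch.load('checkpoint.pth', map_location='cpu')  # assumed path
opt_state = ckpt['optimizer']['state']

# Current parameter shapes, in the order the optimizer indexes them.
param_shapes = [p.shape for g in optimizer.param_groups for p in g['params']]

# Drop any saved entry whose moment tensor no longer matches the model,
# e.g. a classification head resized from 91 classes to 3.
for idx in list(opt_state.keys()):
    exp_avg = opt_state[idx].get('exp_avg')
    if exp_avg is not None and (idx >= len(param_shapes)
                                or exp_avg.shape != param_shapes[idx]):
        del opt_state[idx]

optimizer.load_state_dict(ckpt['optimizer'])
```
Adam lazily re-initializes the pruned entries on the next step, so only those parameters lose their momentum history.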
Hello all, I can't build it for torch 1.12.0 with CUDA version 10.2. Your help is much appreciated!
Probably a mismatch between your torch and CUDA versions.
```
The detected CUDA version (9.1) mismatches the version that was used to compile PyTorch (11.6). Please make sure to use the same CUDA versions.
```
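For anyone comparing versions: the runtime side can be checked with standard torch attributes, and the build side with `nvcc --version` (the toolkit CUDA_HOME points at); the extension build needs the two to match:
```python
import torch
print(torch.__version__)   # e.g. 1.12.0
print(torch.version.cuda)  # CUDA version this torch wheel was compiled with
```
If they disagree, point CUDA_HOME at a matching toolkit (as in the first comment) or install a torch wheel built for your local CUDA version.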