duanyuqi987 / YAJUNblog

Using GitHub Issues as a blog to record what I learn, for easy reference.

Collection of all problems encountered during YOLOv3 training and testing #17

Open duanyuqi987 opened 4 years ago

duanyuqi987 commented 4 years ago

1. torch.load: invalid load key, '\x00'? An error that appeared when loading weights in the MaskDetection_yolo3_pytorch project.

Answer: loading yolov3.weights with torch.load reports:

torch.load invalid load key, '\x00'

yolov3.weights is a raw Darknet binary file, not a pickled PyTorch checkpoint, so torch.load cannot parse it. Dispatch on the file extension and load Darknet weights with load_darknet_weights instead:

cfg = 'cfg/yolov3.cfg'
model = Darknet(cfg, img_size)

# Load weights
if weights.endswith('.pt'):  # pytorch format
    model.load_state_dict(torch.load(weights, map_location=device)['model'])
else:  # darknet format
    _ = load_darknet_weights(model, weights)

Loading, saving, and converting weights:

def load_darknet_weights(self, weights, cutoff=-1):
    # Parses and loads the weights stored in 'weights'
    # cutoff: save layers between 0 and cutoff (if cutoff = -1 all are saved)
    weights_file = weights.split(os.sep)[-1]

    # Try to download weights if not available locally
    if not os.path.isfile(weights):
        try:
            os.system('wget https://pjreddie.com/media/files/' + weights_file + ' -O ' + weights)
        except IOError:
            print(weights + ' not found.\nTry https://drive.google.com/drive/folders/1uxgUBemJVw9wZsdpboYbzUN4bcRhsuAI')

    # Establish cutoffs
    if weights_file == 'darknet53.conv.74':
        cutoff = 75
    elif weights_file == 'yolov3-tiny.conv.15':
        cutoff = 15

    # Read weights file
    with open(weights, 'rb') as f:
        # Read Header https://github.com/AlexeyAB/darknet/issues/2914#issuecomment-496675346
        self.version = np.fromfile(f, dtype=np.int32, count=3)  # (int32) version info: major, minor, revision
        self.seen = np.fromfile(f, dtype=np.int64, count=1)  # (int64) number of images seen during training

        weights = np.fromfile(f, dtype=np.float32)  # The rest are weights

    ptr = 0
    for i, (module_def, module) in enumerate(zip(self.module_defs[:cutoff], self.module_list[:cutoff])):
        if module_def['type'] == 'convolutional':
            conv_layer = module[0]
            if module_def['batch_normalize']:
                # Load BN bias, weights, running mean and running variance
                bn_layer = module[1]
                num_b = bn_layer.bias.numel()  # Number of biases
                # Bias
                bn_b = torch.from_numpy(weights[ptr:ptr + num_b]).view_as(bn_layer.bias)
                bn_layer.bias.data.copy_(bn_b)
                ptr += num_b
                # Weight
                bn_w = torch.from_numpy(weights[ptr:ptr + num_b]).view_as(bn_layer.weight)
                bn_layer.weight.data.copy_(bn_w)
                ptr += num_b
                # Running Mean
                bn_rm = torch.from_numpy(weights[ptr:ptr + num_b]).view_as(bn_layer.running_mean)
                bn_layer.running_mean.data.copy_(bn_rm)
                ptr += num_b
                # Running Var
                bn_rv = torch.from_numpy(weights[ptr:ptr + num_b]).view_as(bn_layer.running_var)
                bn_layer.running_var.data.copy_(bn_rv)
                ptr += num_b
            else:
                # Load conv. bias
                num_b = conv_layer.bias.numel()
                conv_b = torch.from_numpy(weights[ptr:ptr + num_b]).view_as(conv_layer.bias)
                conv_layer.bias.data.copy_(conv_b)
                ptr += num_b
            # Load conv. weights
            num_w = conv_layer.weight.numel()
            conv_w = torch.from_numpy(weights[ptr:ptr + num_w]).view_as(conv_layer.weight)
            conv_layer.weight.data.copy_(conv_w)
            ptr += num_w

    return cutoff


def save_weights(self, path='model.weights', cutoff=-1):
    # Converts a PyTorch model to Darknet format (.pt to .weights)
    # Note: does not work if model.fuse() is applied
    with open(path, 'wb') as f:
        # Write Header https://github.com/AlexeyAB/darknet/issues/2914#issuecomment-496675346
        self.version.tofile(f)  # (int32) version info: major, minor, revision
        self.seen.tofile(f)  # (int64) number of images seen during training

        # Iterate through layers
        for i, (module_def, module) in enumerate(zip(self.module_defs[:cutoff], self.module_list[:cutoff])):
            if module_def['type'] == 'convolutional':
                conv_layer = module[0]
                # If batch norm, write bn parameters first
                if module_def['batch_normalize']:
                    bn_layer = module[1]
                    bn_layer.bias.data.cpu().numpy().tofile(f)
                    bn_layer.weight.data.cpu().numpy().tofile(f)
                    bn_layer.running_mean.data.cpu().numpy().tofile(f)
                    bn_layer.running_var.data.cpu().numpy().tofile(f)
                # Write conv bias
                else:
                    conv_layer.bias.data.cpu().numpy().tofile(f)
                # Write conv weights
                conv_layer.weight.data.cpu().numpy().tofile(f)


def convert(cfg='cfg/yolov3-spp.cfg', weights='weights/yolov3-spp.weights'):
    # Converts between PyTorch and Darknet formats per extension (i.e. .weights to .pt and vice versa)
    # from models import *; convert('cfg/yolov3-spp.cfg', 'weights/yolov3-spp.weights')

    # Initialize model
    model = Darknet(cfg)

    # Load weights and save
    if weights.endswith('.pt'):  # PyTorch format
        model.load_state_dict(torch.load(weights, map_location='cpu')['model'])
        save_weights(model, path='converted.weights', cutoff=-1)
        print("Success: converted '%s' to 'converted.weights'" % weights)

    elif weights.endswith('.weights'):  # Darknet format
        _ = load_darknet_weights(model, weights)
        chkpt = {'epoch': -1, 'best_loss': None, 'model': model.state_dict(), 'optimizer': None}
        torch.save(chkpt, 'converted.pt')
        print("Success: converted '%s' to 'converted.pt'" % weights)

    else:
        print('Error: extension not supported.')
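
For reference, a minimal way to drive the converter (a sketch only; it assumes the three functions above live in models.py of an ultralytics/yolov3-style repo, and the cfg/weights paths are example values):

# Hypothetical usage; adjust paths to your own cfg and weights files.
from models import Darknet, convert
import torch

# Darknet -> PyTorch: writes 'converted.pt' in the working directory
convert(cfg='cfg/yolov3.cfg', weights='weights/yolov3.weights')

# The converted checkpoint can then be loaded the normal PyTorch way
model = Darknet('cfg/yolov3.cfg')
model.load_state_dict(torch.load('converted.pt', map_location='cpu')['model'])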
duanyuqi987 commented 4 years ago

torch RuntimeError: Error(s) in loading state_dict for CRNN?

Loading the trained model raises the following error.

model.load_state_dict(torch.load(model_path)) Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 777, in load_state_dict self.class.name, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for CRNN: Missing key(s) in state_dict: "cnn.conv0.bias", "cnn.conv0.weight", "cnn.conv1.bias", "cnn.conv1.weight", "cnn.conv2.bias", "cnn.conv2.weight", "cnn.batchnorm2.running_var", "cnn.batchnorm2.bias", "cnn.batchnorm2.weight", "cnn.batchnorm2.running_mean", "cnn.conv3.bias", "cnn.conv3.weight", "cnn.conv4.bias", "cnn.conv4.weight", "cnn.batchnorm4.running_var", "cnn.batchnorm4.bias", "cnn.batchnorm4.weight", "cnn.batchnorm4.running_mean", "cnn.conv5.bias", "cnn.conv5.weight", "cnn.conv6.bias", "cnn.conv6.weight", "cnn.batchnorm6.running_var", "cnn.batchnorm6.bias", "cnn.batchnorm6.weight", "cnn.batchnorm6.running_mean", "rnn.0.rnn.bias_ih_l0_reverse", "rnn.0.rnn.weight_hh_l0_reverse", "rnn.0.rnn.bias_ih_l0", "rnn.0.rnn.bias_hh_l0", "rnn.0.rnn.weight_ih_l0_reverse", "rnn.0.rnn.weight_ih_l0", "rnn.0.rnn.bias_hh_l0_reverse", "rnn.0.rnn.weight_hh_l0", "rnn.0.embedding.bias", "rnn.0.embedding.weight", "rnn.1.rnn.bias_ih_l0_reverse", "rnn.1.rnn.weight_hh_l0_reverse", "rnn.1.rnn.bias_ih_l0", "rnn.1.rnn.bias_hh_l0", "rnn.1.rnn.weight_ih_l0_reverse", "rnn.1.rnn.weight_ih_l0", "rnn.1.rnn.bias_hh_l0_reverse", "rnn.1.rnn.weight_hh_l0", "rnn.1.embedding.bias", "rnn.1.embedding.weight". Unexpected key(s) in state_dict: "module.cnn.conv0.weight", "module.cnn.conv0.bias", "module.cnn.conv1.weight", "module.cnn.conv1.bias", "module.cnn.conv2.weight", "module.cnn.conv2.bias", "module.cnn.batchnorm2.weight", "module.cnn.batchnorm2.bias", "module.cnn.batchnorm2.running_mean", "module.cnn.batchnorm2.running_var", "module.cnn.batchnorm2.num_batches_tracked", "module.cnn.conv3.weight", "module.cnn.conv3.bias", "module.cnn.conv4.weight", "module.cnn.conv4.bias", "module.cnn.batchnorm4.weight", "module.cnn.batchnorm4.bias", "module.cnn.batchnorm4.running_mean", "module.cnn.batchnorm4.running_var", "module.cnn.batchnorm4.num_batches_tracked", "module.cnn.conv5.weight", "module.cnn.conv5.bias", "module.cnn.conv6.weight", "module.cnn.conv6.bias", "module.cnn.batchnorm6.weight", "module.cnn.batchnorm6.bias", "module.cnn.batchnorm6.running_mean", "module.cnn.batchnorm6.running_var", "module.cnn.batchnorm6.num_batches_tracked", "module.rnn.0.rnn.weight_ih_l0", "module.rnn.0.rnn.weight_hh_l0", "module.rnn.0.rnn.bias_ih_l0", "module.rnn.0.rnn.bias_hh_l0", "module.rnn.0.rnn.weight_ih_l0_reverse", "module.rnn.0.rnn.weight_hh_l0_reverse", "module.rnn.0.rnn.bias_ih_l0_reverse", "module.rnn.0.rnn.bias_hh_l0_reverse", "module.rnn.0.embedding.weight", "module.rnn.0.embedding.bias", "module.rnn.1.rnn.weight_ih_l0", "module.rnn.1.rnn.weight_hh_l0", "module.rnn.1.rnn.bias_ih_l0", "module.rnn.1.rnn.bias_hh_l0", "module.rnn.1.rnn.weight_ih_l0_reverse", "module.rnn.1.rnn.weight_hh_l0_reverse", "module.rnn.1.rnn.bias_ih_l0_reverse", "module.rnn.1.rnn.bias_hh_l0_reverse", "module.rnn.1.embedding.weight", "module.rnn.1.embedding.bias".

Why do the state_dict key names suddenly have a "module." prefix? Inspect the loaded dict and rename the keys.

dir(torch.load(model_path)) ['_OrderedDictmap', '_OrderedDict__marker', '_OrderedDictroot', '_OrderedDictupdate', 'class', 'cmp', 'contains', 'delattr', 'delitem', 'dict', 'doc', 'eq', 'format', 'ge', 'getattribute', 'getitem', 'gt', 'hash', 'init', 'iter', 'le', 'len', 'lt', 'module', 'ne', 'new', 'reduce', 'reduce_ex', 'repr', 'reversed', 'setattr', 'setitem', 'sizeof', 'str', 'subclasshook', 'weakref__', 'clear', 'copy', 'fromkeys', 'get', 'has_key', 'items', 'iteritems', 'iterkeys', 'itervalues', 'keys', 'pop', 'popitem', 'setdefault', 'update', 'values', 'viewitems', 'viewkeys', 'viewvalues']

model.load_state_dict({k.replace('module.', ''): v for k, v in torch.load(model_path).items()})
# => IncompatibleKeys(missing_keys=[], unexpected_keys=[])
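
The prefix comes from training with nn.DataParallel; another way to avoid the renaming step entirely is to save the unwrapped module's parameters in the training script (a sketch, assuming `model` is the CRNN instance used for training and 'crnn.pth' is a hypothetical path):

import torch
import torch.nn as nn

model = nn.DataParallel(model)                      # multi-GPU wrapper; it prepends 'module.' to parameter names
# ... training loop ...
torch.save(model.module.state_dict(), 'crnn.pth')   # save the inner module, so the keys carry no 'module.' prefix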

References:

Pytorch:Unexpected key(s) in state_dict:
duanyuqi987 commented 4 years ago

[Fixing a PyTorch model-loading error] Unexpected key(s) in state_dict: "epoch", "arch", "state_dict", ...?

es/module.py", line 719, in load_state_dict self.class.name, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for DataParallel: Missing key(s) in state_dict: "module.features.0.0.weight", "module.features.0.1.weight", "module.features.0.1.bias", "module.features.0.1.running_mean", "module.features.0.1.running_var", "module.features.1.conv.0.weight", "module.features.1.conv.1.weight", "module.features.1.conv.1.bias", "module.features.1.conv.1.running_mean", "module.features.1.conv.1.running_var", "module.features.1.conv.3.weight", "module.features.1.conv.4.weight", "module.features.1.conv.4.bias", "module.features.1.conv.4.running_mean", "module.features.1.conv.4.running_var", "module.features.2.conv.0.weight", "module.features.2.conv.1.weight", "module.features.2.conv.1.bias", "module.features.2.conv.1.running_mean", "module.features.2.conv.1.running_var", "module.features.2.conv.3.weight", "module.features.2.conv.4.weight", "module.features.2.conv.4.bias", "module.features.2.conv.4.running_mean", "module.features.2.conv.4.running_var", "module.features.2.conv.6.weight", "module.features.2.conv.7.weight", "module.features.2.conv.7.bias", "module.features.2.conv.7.running_mean", "module.features.2.conv.7.running_var", "module.features.3.conv.0.weight", "module.features.3.conv.1.weight", "module.features.3.conv.1.bias", "module.features.3.conv.1.running_mean", "module.features.3.conv.1.running_var", "module.features.3.conv.3.weight", "module.features.3.conv.4.weight", "module.features.3.conv.4.bias", "module.features.3.conv.4.running_mean", "module.features.3.conv.4.running_var", "module.features.3.conv.6.weight", "module.features.3.conv.7.weight", "module.features.3.conv.7.bias", "module.features.3.conv.7.running_mean", "module.features.3.conv.7.running_var", "module.features.4.conv.0.weight", "module.features.4.conv.1.weight", "module.features.4.conv.1.bias", "module.features.4.conv.1.running_mean", "module.features.4.conv.1.running_var", "module.features.4.conv.3.weight", "module.features.4.conv.4.weight", "module.features.4.conv.4.bias", "module.features.4.conv.4.running_mean", "module.features.4.conv.4.running_var", "module.features.4.conv.6.weight", "module.features.4.conv.7.weight", "module.features.4.conv.7.bias", "module.features.4.conv.7.running_mean", "module.features.4.conv.7.running_var", "module.features.5.conv.0.weight", "module.features.5.conv.1.weight", "module.features.5.conv.1.bias", "module.features.5.conv.1.running_mean", "module.features.5.conv.1.running_var", "module.features.5.conv.3.weight", "module.features.5.conv.4.weight", "module.features.5.conv.4.bias", "module.features.5.conv.4.running_mean", "module.features.5.conv.4.running_var", "module.features.5.conv.6.weight", "module.features.5.conv.7.weight", "module.features.5.conv.7.bias", "module.features.5.conv.7.running_mean", "module.features.5.conv.7.running_var", "module.features.6.conv.0.weight", "module.features.6.conv.1.weight", "module.features.6.conv.1.bias", "module.features.6.conv.1.running_mean", "module.features.6.conv.1.running_var", "module.features.6.conv.3.weight", "module.features.6.conv.4.weight", "module.features.6.conv.4.bias", "module.features.6.conv.4.running_mean", "module.features.6.conv.4.running_var", "module.features.6.conv.6.weight", "module.features.6.conv.7.weight", "module.features.6.conv.7.bias", "module.features.6.conv.7.running_mean", "module.features.6.conv.7.running_var", "module.features.7.conv.0.weight", "module.features.7.conv.1.weight", 
"module.features.7.conv.1.bias", "module.features.7.conv.1.running_mean", "module.features.7.conv.1.running_var", "module.features.7.conv.3.weight", "module.features.7.conv.4.weight", "module.features.7.conv.4.bias", "module.features.7.conv.4.running_mean", "module.features.7.conv.4.running_var", "module.features.7.conv.6.weight", "module.features.7.conv.7.weight", "module.features.7.conv.7.bias", "module.features.7.conv.7.running_mean", "module.features.7.conv.7.running_var", "module.features.8.conv.0.weight", "module.features.8.conv.1.weight", "module.features.8.conv.1.bias", "module.features.8.conv.1.running_mean", "module.features.8.conv.1.running_var", "module.features.8.conv.3.weight", "module.features.8.conv.4.weight", "module.features.8.conv.4.bias", "module.features.8.conv.4.running_mean", "module.features.8.conv.4.running_var", "module.features.8.conv.6.weight", "module.features.8.conv.7.weight", "module.features.8.conv.7.bias", "module.features.8.conv.7.running_mean", "module.features.8.conv.7.running_var", "module.features.9.conv.0.weight", "module.features.9.conv.1.weight", "module.features.9.conv.1.bias", "module.features.9.conv.1.running_mean", "module.features.9.conv.1.running_var", "module.features.9.conv.3.weight", "module.features.9.conv.4.weight", "module.features.9.conv.4.bias", "module.features.9.conv.4.running_mean", "module.features.9.conv.4.running_var", "module.features.9.conv.6.weight", "module.features.9.conv.7.weight", "module.features.9.conv.7.bias", "module.features.9.conv.7.running_mean", "module.features.9.conv.7.running_var", "module.features.10.conv.0.weight", "module.features.10.conv.1.weight", "module.features.10.conv.1.bias", "module.features.10.conv.1.running_mean", "module.features.10.conv.1.running_var", "module.features.10.conv.3.weight", "module.features.10.conv.4.weight", "module.features.10.conv.4.bias", "module.features.10.conv.4.running_mean", "module.features.10.conv.4.running_var", "module.features.10.conv.6.weight", "module.features.10.conv.7.weight", "module.features.10.conv.7.bias", "module.features.10.conv.7.running_mean", "module.features.10.conv.7.running_var", "module.features.11.conv.0.weight", "module.features.11.conv.1.weight", "module.features.11.conv.1.bias", "module.features.11.conv.1.running_mean", "module.features.11.conv.1.running_var", "module.features.11.conv.3.weight", "module.features.11.conv.4.weight", "module.features.11.conv.4.bias", "module.features.11.conv.4.running_mean", "module.features.11.conv.4.running_var", "module.features.11.conv.6.weight", "module.features.11.conv.7.weight", "module.features.11.conv.7.bias", "module.features.11.conv.7.running_mean", "module.features.11.conv.7.running_var", "module.features.12.conv.0.weight", "module.features.12.conv.1.weight", "module.features.12.conv.1.bias", "module.features.12.conv.1.running_mean", "module.features.12.conv.1.running_var", "module.features.12.conv.3.weight", "module.features.12.conv.4.weight", "module.features.12.conv.4.bias", "module.features.12.conv.4.running_mean", "module.features.12.conv.4.running_var", "module.features.12.conv.6.weight", "module.features.12.conv.7.weight", "module.features.12.conv.7.bias", "module.features.12.conv.7.running_mean", "module.features.12.conv.7.running_var", "module.features.13.conv.0.weight", "module.features.13.conv.1.weight", "module.features.13.conv.1.bias", "module.features.13.conv.1.running_mean", "module.features.13.conv.1.running_var", "module.features.13.conv.3.weight", "module.features.13.conv.4.weight", 
"module.features.13.conv.4.bias", "module.features.13.conv.4.running_mean", "module.features.13.conv.4.running_var", "module.features.13.conv.6.weight", "module.features.13.conv.7.weight", "module.features.13.conv.7.bias", "module.features.13.conv.7.running_mean", "module.features.13.conv.7.running_var", "module.features.14.conv.0.weight", "module.features.14.conv.1.weight", "module.features.14.conv.1.bias", "module.features.14.conv.1.running_mean", "module.features.14.conv.1.running_var", "module.features.14.conv.3.weight", "module.features.14.conv.4.weight", "module.features.14.conv.4.bias", "module.features.14.conv.4.running_mean", "module.features.14.conv.4.running_var", "module.features.14.conv.6.weight", "module.features.14.conv.7.weight", "module.features.14.conv.7.bias", "module.features.14.conv.7.running_mean", "module.features.14.conv.7.running_var", "module.features.15.conv.0.weight", "module.features.15.conv.1.weight", "module.features.15.conv.1.bias", "module.features.15.conv.1.running_mean", "module.features.15.conv.1.running_var", "module.features.15.conv.3.weight", "module.features.15.conv.4.weight", "module.features.15.conv.4.bias", "module.features.15.conv.4.running_mean", "module.features.15.conv.4.running_var", "module.features.15.conv.6.weight", "module.features.15.conv.7.weight", "module.features.15.conv.7.bias", "module.features.15.conv.7.running_mean", "module.features.15.conv.7.running_var", "module.features.16.conv.0.weight", "module.features.16.conv.1.weight", "module.features.16.conv.1.bias", "module.features.16.conv.1.running_mean", "module.features.16.conv.1.running_var", "module.features.16.conv.3.weight", "module.features.16.conv.4.weight", "module.features.16.conv.4.bias", "module.features.16.conv.4.running_mean", "module.features.16.conv.4.running_var", "module.features.16.conv.6.weight", "module.features.16.conv.7.weight", "module.features.16.conv.7.bias", "module.features.16.conv.7.running_mean", "module.features.16.conv.7.running_var", "module.features.17.conv.0.weight", "module.features.17.conv.1.weight", "module.features.17.conv.1.bias", "module.features.17.conv.1.running_mean", "module.features.17.conv.1.running_var", "module.features.17.conv.3.weight", "module.features.17.conv.4.weight", "module.features.17.conv.4.bias", "module.features.17.conv.4.running_mean", "module.features.17.conv.4.running_var", "module.features.17.conv.6.weight", "module.features.17.conv.7.weight", "module.features.17.conv.7.bias", "module.features.17.conv.7.running_mean", "module.features.17.conv.7.running_var", "module.features.18.0.weight", "module.features.18.1.weight", "module.features.18.1.bias", "module.features.18.1.running_mean", "module.features.18.1.running_var", "module.classifier.1.weight", "module.classifier.1.bias". Unexpected key(s) in state_dict: "epoch", "arch", "state_dict", "best_prec1", "optimizer".

I recently needed the PyTorch version of MobileNetV2. After training, loading the trained model with torch.load kept raising errors:

self.__class__.__name__, "\n\t".join(error_msgs)))

RuntimeError: Error(s) in loading state_dict for DataParallel:

or: Unexpected key(s) in state_dict: "epoch", "arch", "state_dict", "best_prec1", "optimizer".

Here are three solutions I found online. I tried them myself and the error persisted, and I also felt they would slow the model down.

Solution 1: the error appears because the checkpoint was saved from a model wrapped in torch.nn.DataParallel(), while the current code does not wrap the model. You can add these lines before loading the checkpoint:

model = torch.nn.DataParallel(model)
cudnn.benchmark = True

Solution 2:

# original saved file with DataParallel
state_dict = torch.load('myfile.pth')
# create new OrderedDict that does not contain `module.`
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[7:] # remove `module.`
    new_state_dict[name] = v
# load params
model.load_state_dict(new_state_dict)

Solution 3:

model.load_state_dict({k.replace('module.',''):v for k,v in torch.load('myfile.pth').items()})

The solutions above come from https://discuss.pytorch.org/t/solved-keyerror-unexpected-key-module-encoder-embedding-weight-in-state-dict/1686/3 and https://blog.csdn.net/kaixinjiuxing666/article/details/85115077

None of them solved my problem, though. Here is how I finally fixed it.

Solution 4 (my solution):

model = MobileNetV2()
checkpoint = torch.load(modelpath)  # modelpath is the path to the trained checkpoint file
model.load_state_dict(checkpoint['state_dict'])
output = model(x)

The saved checkpoint file is a dictionary with several top-level keys (epoch and so on), but we only need the model parameters. The error occurs because the whole checkpoint dictionary, with all its extra keys, is passed to load_state_dict, and PyTorch cannot interpret it as a state_dict.
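
A quick way to see this (a minimal sketch; the exact keys depend on how the training script saved the checkpoint):

import torch

checkpoint = torch.load(modelpath)
print(checkpoint.keys())
# e.g. dict_keys(['epoch', 'arch', 'state_dict', 'best_prec1', 'optimizer'])
# Only the 'state_dict' entry should be passed to model.load_state_dict().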

Problem solved!

Source: https://blog.csdn.net/ycc2011/article/details/89421767

duanyuqi987 commented 4 years ago

Converting between .pt files and .weights files?

.pt is the PyTorch framework's format; .weights is the Darknet framework's format.

from darknet import *

# .pt -> .weights: I have tried this, and it works
if weights.endswith('.pt'):
    model = Darknet("cfgfile.cfg")
    model.load_weights("best.pt")
    model.saveweights(savedfile='converted.weights', cutoff=-1)
# .weights -> .pt: I have not tried the branch below, so I am not sure it works
elif weights.endswith('.weights'):
    _ = load_darknet_weights(model, weights)
    chkpt = {'epoch': -1, 'best_loss': None, 'model': model.state_dict(), 'optimizer': None}
    torch.save(chkpt, 'converted.pt')
    print("Success: converted '%s' to 'converted.pt'" % weights)

These functions can generally be found in darknet.py, though the names may differ slightly. The code I used is this GitHub repo: https://github.com/ayooshkathuria/pytorch-yolo-v3

Source: https://blog.csdn.net/McEason/article/details/100023124

duanyuqi987 commented 4 years ago

module 'tensorflow' has no attribute 'get_default_graph'?

When I was doing deep learning with Keras and TensorFlow, Python 3.6 raised this error. The cause is that the installed Keras and TensorFlow versions are too new, so the module no longer exists or its API has changed and is no longer compatible.

Fix: downgrade TensorFlow and Keras.

pip uninstall tensorflow  # uninstall TensorFlow

pip uninstall keras  # uninstall Keras

Install TensorFlow 1.2.0 and Keras 2.0.9:

pip install tensorflow==1.2.0

pip install keras==2.0.9
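
To confirm the downgrade took effect, a small check (assuming both packages installed cleanly):

import tensorflow as tf
import keras

print(tf.__version__)     # expect 1.2.0
print(keras.__version__)  # expect 2.0.9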

Source: https://blog.csdn.net/u014466109/article/details/88877321