Open cnfjsss opened 5 years ago
Hi, lufficc, I used your SSD model to train and got a weights file (.pth) on Win7. Now I want to run inference in C++, but I ran into a problem while converting the model. My conversion code is as follows:

import torch
import torchvision
import torchvision.models as models
import argparse
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

from ssd.modeling.detector import build_detection_model
from ssd.config import cfg
from ssd.utils.checkpoint import CheckPointer
def main():
    parser = argparse.ArgumentParser(description="SSD weights file converter.")
    parser.add_argument(
        "--config-file",
        default="",
        metavar="FILE",
        help="path to config file",
        type=str,
    )
    parser.add_argument("--ckpt", type=str, default=None, help="Trained weights.")
    args = parser.parse_args()

    cfg.merge_from_file(args.config_file)
    cfg.freeze()

    # device = torch.device("cuda")
    device = torch.device("cpu")
    model = build_detection_model(cfg)
    model = model.to(device)
    print("###########finish building model...")

    state = torch.load(args.ckpt, map_location=torch.device("cpu"))
    if isinstance(model, DistributedDataParallel):
        model = model.module
    model.load_state_dict(state['model'], strict=True)
    model.eval()

    example = torch.rand(1, 3, 300, 300)
    traced_script_module = torch.jit.trace(model, example, optimize=False, check_trace=False)
    output = traced_script_module(example)
    traced_script_module.save('./outputs/vgg_ssd300_battery/vgg_ssd300_model.pt')


if __name__ == '__main__':
    main()
===============================================================
An error occurs in ssd\modeling\box_head\inference.py. The relevant output follows; I don't know how to fix it:
C:\workspace\deeplearning\SSD_battery>python modelConvert.py --config-file configs/vgg_ssd300_battery.yaml --ckpt outputs/vgg_ssd300_battery/model_011100.pth
C:\workspace\deeplearning\SSD_battery\ssd\modeling\detector\ssd_detector.py
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\box_head.py
C:\workspace\deeplearning\SSD_battery\ssd\modeling\anchors\prior_box.py:51: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
priors = torch.tensor(priors)
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\inference.py:19: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
for batch_id in range(batch_size):
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\inference.py:25: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
for class_id in range(1, per_img_scores.size(1)): # skip background
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\inference.py:29: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if scores.size(0) == 0:
Traceback (most recent call last):
File "modelConvert.py", line 98, in
==========================================================
The offending line seems to be:
nmsed_labels = torch.tensor([class_id] * keep.size(0), device=device)
---> So I tried changing it to:
nmsed_labels = torch.tensor(torch.tensor([class_id]) * keep.size(0), device=device)
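Note that torch.tensor([class_id]) * keep.size(0) multiplies the value instead of repeating it, so it yields a single label equal to class_id * N rather than N copies of class_id. A list-free sketch of the same line, assuming keep is the 1-D LongTensor of kept indices returned by NMS:

# Same length, dtype and device as `keep`, every element set to class_id.
nmsed_labels = torch.full_like(keep, class_id)

This avoids the torch.tensor constant warning, but the surrounding Python loops are still unrolled into the trace, so on its own it will not make the trace generalize to other images; see the wrapper sketch further down.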
C:\workspace\deeplearning\SSD_battery>python modelConvert.py --config-file configs/vgg_ssd300_battery.yaml --ckpt outputs/vgg_ssd300_battery/model_011100.pth
C:\workspace\deeplearning\SSD_battery\ssd\modeling\detector\ssd_detector.py
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\box_head.py
C:\workspace\deeplearning\SSD_battery\ssd\modeling\anchors\prior_box.py:51: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
priors = torch.tensor(priors)
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\inference.py:19: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
for batch_id in range(batch_size):
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\inference.py:25: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
for class_id in range(1, per_img_scores.size(1)): # skip background
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\inference.py:29: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if scores.size(0) == 0:
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\inference.py:38: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
nmsed_labels = torch.tensor(torch.tensor([class_id]) * keep.size(0), device=device)
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\inference.py:38: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
nmsed_labels = torch.tensor(torch.tensor([class_id]) * keep.size(0), device=device)
C:\workspace\deeplearning\SSD_battery\ssd\modeling\box_head\inference.py:54: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if processed_boxes.size(0) > self.cfg.TEST.MAX_PER_IMAGE > 0:
C:\ProgramData\Anaconda3\lib\site-packages\torch\tensor.py:435: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
'incorrect results).', category=RuntimeWarning)
Traceback (most recent call last):
File "modelConvert.py", line 98, in
======> I don't know how to resolve this. Could you please give an example? Thanks a lot.
By the way, it works OK when I use your demo.py for inference.
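One way to sidestep these tracing problems is to trace only the convolutional part of the detector and keep the data-dependent decoding/NMS outside the trace. A sketch only: the attribute names backbone and box_head.predictor are assumptions about how this repo composes the detector, so adjust them to the real module names if they differ.

import torch
import torch.nn as nn

class RawSSD(nn.Module):
    """Wraps the detector so the traced graph ends at the raw head outputs."""
    def __init__(self, detector):
        super().__init__()
        self.backbone = detector.backbone              # feature extractor (assumed name)
        self.predictor = detector.box_head.predictor   # class/box heads (assumed name)

    def forward(self, images):
        features = self.backbone(images)
        cls_logits, bbox_pred = self.predictor(features)
        return cls_logits, bbox_pred

wrapper = RawSSD(model).eval()
example = torch.rand(1, 3, 300, 300)
traced = torch.jit.trace(wrapper, example)
traced.save('./outputs/vgg_ssd300_battery/vgg_ssd300_raw.pt')

The saved module returns raw class logits and box regressions; softmax, box decoding with the priors, and NMS then run outside the traced graph, either in Python or in the C++ client.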
Regarding demo.py: I'm actually having trouble with it, because the default checkpoint vgg16_reducedfc.pth downloaded from the Internet is missing the "model" key. Could you please tell me how to fix that, or share a correct .pth file?
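For what it's worth, vgg16_reducedfc.pth is normally just the pretrained VGG backbone weights (a bare state_dict used to initialize training), not a full detector checkpoint, so it has no "model" key; the files saved during training (e.g. model_011100.pth) do. A defensive loading sketch that handles both layouts, assuming model is the detector built by build_detection_model(cfg):

state = torch.load("vgg16_reducedfc.pth", map_location="cpu")
# Training checkpoints wrap everything under a "model" key; a bare backbone
# file is already a plain state_dict.
state_dict = state["model"] if isinstance(state, dict) and "model" in state else state
# strict=False because a backbone-only file has no weights for the box head.
model.load_state_dict(state_dict, strict=False)

Note that demo.py needs a fully trained detector checkpoint to produce meaningful detections; the backbone-only file will not.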
@cnfjsss Hi, I want to export the ScriptModule too, but I get an error when I trace the model:
torch.jit.TracingCheckError: Tracing failed sanity checks!
ERROR: Graphs differed across invocations!
I also tried using demo.py to export, but it doesn't work. Can you show me how you got the traced module?
My PyTorch version is 1.4.
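That TracingCheckError is expected with this post-processing: the loops and if-statements in inference.py are data-dependent, so re-tracing with a different input produces a different graph. A minimal workaround sketch (check_trace is a standard torch.jit.trace argument) is to skip the consistency check and accept that the trace only hard-codes the control flow taken for the example input:

example = torch.rand(1, 3, 300, 300)
# Skips the re-trace comparison that raises TracingCheckError; the resulting
# module still bakes in the control-flow path taken for `example`.
traced = torch.jit.trace(model, example, check_trace=False)

A more robust route is to keep the data-dependent post-processing out of the trace entirely, as in the wrapper sketch above.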