Event-AHU / VcT_Remote_Sensing_Change_Detection

[IEEE TGRS-2023] VcT: Visual change Transformer for Remote Sensing Image Change Detection

Some issues occurred when calculating inference time #9

Open peach11ok opened 4 months ago

peach11ok commented 4 months ago

Hello, we have run into some issues when computing the inference time. Below is the code we used:

```python
import torch
from models.networks import Reliable_Transformer

iterations = 300  # number of timed iterations

model = Reliable_Transformer(input_nc=3, output_nc=2, resnet_stages_num=4,
                             with_pos='learned', enc_depth=1, dec_depth=8)
device = torch.device("cuda:0")
model.to(device)

random_input1 = torch.randn(1, 3, 256, 256).to(device)
random_input2 = torch.randn(1, 3, 256, 256).to(device)
starter, ender = torch.cuda.Event(enable_timing=True), torch.cuda.Event(enable_timing=True)

# GPU warm-up
for _ in range(50):
    _ = model(random_input1, random_input2)

# timed runs
times = torch.zeros(iterations)  # store the time of each iteration
with torch.no_grad():
    for iter in range(iterations):
        starter.record()
        _ = model(random_input1, random_input2)
        ender.record()
        # wait for the GPU to finish before reading the timer
        torch.cuda.synchronize()
        curr_time = starter.elapsed_time(ender)  # elapsed time in milliseconds
        times[iter] = curr_time
        # print(curr_time)

mean_time = times.mean().item()
print("Inference time: {:.6f}, FPS: {} ".format(mean_time, 1000 / mean_time))
```

And the error is:

```
Traceback (most recent call last):
  File "inferencetime.py", line 17, in <module>
    _ = model(random_input1, random_input2)
  File "/root/anaconda3/envs/ztenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/zt/VcT/VcT_code/models/networks.py", line 364, in forward
    A = self.knngraph(x3)  # b hw hw
  File "/mnt/zt/VcT/VcT_code/models/networks.py", line 352, in knngraph
    d = torch.sparse.FloatTensor(indice1, vals, torch.Size([b * n, n])).to_dense()
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```
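As the error message itself suggests, one way to localize this asynchronous failure is to rerun the same script with `CUDA_LAUNCH_BLOCKING=1`, which makes kernel launches synchronous so the reported stack trace points at the call that actually triggered the assert. A minimal sketch (the variable must be set before CUDA is initialized, e.g. before importing torch):

```python
# Sketch: force synchronous CUDA kernel launches for debugging, as recommended
# by the error message. Set the environment variable before torch touches CUDA
# (equivalently, run from the shell: CUDA_LAUNCH_BLOCKING=1 python inferencetime.py).
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # import torch only after the variable is set
```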

Can you help us?