lix19937 / tensorrt-insight

Deep insight into TensorRT, including but not limited to QAT, PTQ, plugins, Triton inference, and CUDA

lidar_seg train log #1

Open lix19937 opened 1 year ago

lix19937 commented 1 year ago
2023-06-05 22:55:46.677 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:104
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.2887e-02,  1.7004e-01,  7.7665e-03, -1.2065e-03,  3.1058e-02,
         -5.5939e-02,  3.9869e-40,  2.6908e-01, -1.3084e-01, -6.1333e-02,
         -6.7053e-02,  1.3559e-02, -1.3859e-01,  3.8813e-03, -8.3947e-02,
         -2.0623e-02, -2.0725e-02, -1.1800e-01,  1.6839e-01, -4.7589e-02,
         -1.6048e-01,  5.6010e-40, -3.4688e-03,  3.8975e-40, -2.8174e-03,
          7.7292e-02, -7.7709e-02, -8.1481e-03, -9.5860e-02,  6.1986e-02,
          4.2985e-01,  3.2533e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[ 7.2847e-40,  5.4634e-40, -5.9673e-40, -4.2663e-40,  1.8787e-40,
         -3.0531e-40, -5.7855e-40,  7.1423e-40,  3.9641e-40,  4.8552e-40,
         -4.5281e-09, -7.2763e-40,  1.9824e-40, -1.8262e-40,  5.1024e-40,
         -7.0874e-40, -3.3623e-40,  3.6098e-40,  4.4807e-40,  2.7522e-40,
          7.2756e-40, -3.8097e-08,  7.3738e-40, -1.0897e-40, -1.1436e-07,
          6.7669e-40,  6.1741e-40,  4.5877e-07,  7.0437e-40, -5.7909e-41,
          6.3997e-40, -7.5440e-40]], device='cuda:0')
epoch 104, Update LR to 0.00010879620454997907, from IR 0.00010897164694917426
lidarv5-tmp-local-shuffle-0527/default |################################| train: [105][1652/1653]|Tot: 1:06:38 |ETA: 0:00:03 |loss 0.2448 |Data 0.001s(0.144s) |Net 2.419s          
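For reference, the `_input_quantizer` / `_weight_quantizer` tensors printed above look like the amax (calibration range) values tracked by QAT fake-quant nodes: one per-tensor value for the input, one per output channel for the weights. Assuming symmetric int8 quantization (as pytorch-quantization uses), the scale follows as amax / 127. A minimal pure-Python sketch of that mapping, using the logged conv1 input amax:

```python
# Map a logged amax (dynamic range) to the symmetric int8 quantization scale.
# Assumption: symmetric signed int8 with qmax = 127, pytorch-quantization style.
# The amax value below is copied from the conv1._input_quantizer line above.

QMAX = 127  # largest representable magnitude for signed int8 (symmetric)

def amax_to_scale(amax: float, qmax: int = QMAX) -> float:
    """Scale such that q = clamp(round(x / scale), -qmax, qmax)."""
    return amax / qmax

conv1_input_amax = 1539.4375
scale = amax_to_scale(conv1_input_amax)
print(f"conv1 input scale = {scale:.6f}")  # amax / 127
```

The per-channel weight amax values (32 entries) would each map to their own scale the same way.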

2023-06-06 00:02:29.431 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:105
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.4970e-02,  1.6814e-01,  8.9475e-03, -1.8581e-03,  3.1485e-02,
         -4.8718e-02, -5.6179e-40,  2.6865e-01, -1.2469e-01, -5.6836e-02,
         -6.3926e-02,  1.0479e-02, -1.3709e-01,  7.9077e-03, -8.7664e-02,
         -2.1172e-02, -2.0327e-02, -1.1741e-01,  1.6451e-01, -5.0475e-02,
         -1.6287e-01, -5.3757e-40, -4.2388e-03, -3.4203e-40, -7.6826e-04,
          7.6541e-02, -7.8524e-02, -9.0987e-03, -9.3053e-02,  5.8774e-02,
          4.3109e-01,  3.2294e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[-7.0461e-40, -5.5133e-40,  6.2291e-40,  4.2712e-40, -1.7802e-40,
          3.0451e-40,  5.1912e-40, -6.8836e-40, -3.3537e-40, -4.9019e-40,
          3.4148e-10,  7.0544e-40, -1.6765e-40,  1.8327e-40, -5.8743e-40,
          6.9384e-40,  3.9555e-40, -3.7080e-40, -5.2764e-40, -3.3460e-40,
         -7.3601e-40, -7.7681e-09, -7.1094e-40,  1.3496e-40,  3.9451e-08,
         -6.4967e-40, -6.0222e-40, -1.3735e-08, -7.2870e-40,  6.4055e-41,
         -6.8639e-40,  7.7015e-40]], device='cuda:0')
epoch 105, Update LR to 0.00010862069139356363, from IR 0.00010879620454997907
lidarv5-tmp-local-shuffle-0527/default |################################| train: [106][1652/1653]|Tot: 1:22:06 |ETA: 0:00:03 |loss 0.2430 |Data 0.001s(0.154s) |Net 2.980s         
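Note that many `conv2.bias` entries above are on the order of 1e-40, below the smallest normal float32 (2^-126 ≈ 1.18e-38): they are denormals, i.e. numerically zero, plausibly driven there by weight decay. A small pure-Python helper to flag such values (a sketch; in practice one would scan the tensor directly):

```python
# Flag values that are float32 denormals: nonzero but smaller in magnitude
# than the smallest normal float32 (2**-126 ~= 1.18e-38). The ~1e-40 entries
# in the conv2.bias dumps above fall in this range; the ~1e-9 / 1e-7 entries
# do not. Sample values below are copied from the epoch-104 log.

FLT_MIN_NORMAL = 2.0 ** -126  # smallest positive normal float32

def is_denormal(x: float) -> bool:
    return x != 0.0 and abs(x) < FLT_MIN_NORMAL

bias_sample = [7.2847e-40, 5.4634e-40, -4.5281e-09, 4.5877e-07]
print([is_denormal(v) for v in bias_sample])
```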

2023-06-06 01:24:39.407 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:106
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.3616e-02,  1.6502e-01,  1.1459e-02, -7.3224e-04,  3.1575e-02,
         -4.5815e-02, -1.3834e-41,  2.6554e-01, -1.2657e-01, -5.6097e-02,
         -6.0077e-02,  1.2217e-02, -1.3550e-01,  7.3790e-03, -9.0899e-02,
         -1.9378e-02, -1.4133e-02, -1.1718e-01,  1.6311e-01, -4.9545e-02,
         -1.6341e-01,  5.5834e-40,  6.3312e-06,  2.6680e-40, -3.5029e-03,
          7.7257e-02, -8.0032e-02, -7.0960e-03, -9.4800e-02,  5.9132e-02,
          4.2985e-01,  2.9123e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[ 7.2616e-40,  5.4458e-40, -5.9476e-40, -4.2525e-40,  1.8728e-40,
         -3.0433e-40, -4.5502e-40,  6.3586e-40,  2.7346e-40,  4.8395e-40,
          1.1475e-09, -7.2532e-40,  1.9765e-40, -1.8203e-40,  6.3024e-40,
         -7.3693e-40, -4.5682e-40,  3.5980e-40,  5.6826e-40,  3.9600e-40,
          7.0998e-40,  1.6372e-06,  6.8939e-40, -1.0858e-40, -1.9956e-06,
          6.8977e-40,  6.1545e-40, -9.1775e-06,  7.0207e-40, -5.7712e-41,
          7.1394e-40, -7.5194e-40]], device='cuda:0')

epoch 106, Update LR to 0.00010844510730836042, from IR 0.00010862069139356363
lidarv5-tmp-local-shuffle-0527/default |################################| train: [107][1652/1653]|Tot: 2:29:39 |ETA: 0:00:03 |loss 0.2432 |Data 0.001s(3.252s) |Net 5.432s          

2023-06-06 03:54:22.916 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:107
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.1089e-02,  1.6508e-01,  1.5741e-02, -3.3447e-03,  3.4830e-02,
         -4.7827e-02,  6.2442e-40,  2.6557e-01, -1.2535e-01, -5.2924e-02,
         -5.5817e-02,  1.5256e-02, -1.3657e-01,  7.2280e-03, -9.1440e-02,
         -1.9196e-02, -1.3112e-02, -1.1322e-01,  1.6417e-01, -5.0238e-02,
         -1.6143e-01, -5.3580e-40, -1.4572e-03, -2.1948e-40, -6.7317e-03,
          8.0151e-02, -8.0745e-02, -6.2427e-03, -1.0095e-01,  6.4362e-02,
          4.3042e-01,  2.9105e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[-7.0230e-40, -5.4956e-40,  6.2095e-40,  4.2574e-40, -1.7743e-40,
          3.0353e-40,  3.9598e-40, -5.7985e-40, -2.1282e-40, -4.8862e-40,
         -3.5517e-09,  7.0314e-40, -1.6707e-40,  1.8268e-40, -5.8547e-40,
          6.9154e-40,  5.1575e-40, -3.6963e-40, -5.2588e-40, -4.5500e-40,
         -6.8809e-40, -1.8634e-08, -6.3270e-40,  1.3457e-40,  3.1502e-08,
         -6.4751e-40, -6.0026e-40, -7.1252e-08, -7.2639e-40,  6.3859e-41,
         -7.2972e-40,  7.6770e-40]], device='cuda:0')
epoch 107, Update LR to 0.00010826945212203765, from IR 0.00010844510730836042
lidarv5-tmp-local-shuffle-0527/default |################################| train: [108][1652/1653]|Tot: 4:58:28 |ETA: 0:00:02 |loss 0.2433 |Data 0.001s(1.817s) |Net 10.834s         

2023-06-06 08:52:55.265 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:108
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.4488e-02,  1.6699e-01,  1.7869e-02, -8.0357e-03,  3.4708e-02,
         -4.9619e-02, -3.3140e-40,  2.6746e-01, -1.2720e-01, -5.3761e-02,
         -5.6161e-02,  1.3767e-02, -1.3474e-01,  8.4116e-03, -9.2662e-02,
         -1.8048e-02, -1.1099e-02, -1.1339e-01,  1.6425e-01, -5.4869e-02,
         -1.6103e-01,  5.5657e-40, -5.6524e-03,  2.6602e-40, -3.8826e-03,
          8.1553e-02, -7.7865e-02, -7.7385e-03, -1.0737e-01,  6.7119e-02,
          4.3305e-01,  3.2209e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[ 7.2385e-40,  5.4281e-40, -5.9280e-40, -4.2388e-40,  1.8670e-40,
         -3.0335e-40, -3.3227e-40,  5.1253e-40,  1.5130e-40,  4.8238e-40,
          1.6670e-11, -7.2301e-40,  1.9706e-40, -1.8145e-40,  6.2828e-40,
         -7.3462e-40, -5.7662e-40,  3.5862e-40,  5.6650e-40,  5.1600e-40,
          6.3186e-40,  6.3764e-08,  5.8105e-40, -1.0818e-40,  1.3147e-08,
          6.8762e-40,  6.1349e-40, -9.3835e-08,  6.9976e-40, -5.7516e-41,
          7.2678e-40, -7.4949e-40]], device='cuda:0')
epoch 108, Update LR to 0.00010809372566149436, from IR 0.00010826945212203765
lidarv5-tmp-local-shuffle-0527/default |################################| train: [109][1652/1653]|Tot: 3:23:04 |ETA: 0:00:03 |loss 0.2431 |Data 0.001s(0.136s) |Net 7.371s          

2023-06-06 12:16:03.893 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:109
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.3816e-02,  1.6722e-01,  1.9102e-02, -1.0957e-02,  3.7341e-02,
         -4.9487e-02, -2.4052e-40,  2.6676e-01, -1.2916e-01, -5.2166e-02,
         -5.4108e-02,  1.3716e-02, -1.3244e-01,  8.0332e-03, -9.5063e-02,
         -1.6073e-02, -1.2728e-02, -1.1164e-01,  1.6116e-01, -5.1300e-02,
         -1.6068e-01, -5.3401e-40, -9.5238e-03, -2.1869e-40, -6.5004e-03,
          7.8925e-02, -7.8060e-02, -7.3987e-03, -1.0823e-01,  6.9440e-02,
          4.3400e-01,  3.3983e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[-6.9997e-40, -5.4777e-40,  6.1896e-40,  4.2435e-40, -1.7683e-40,
          3.0253e-40,  2.7361e-40, -4.5688e-40, -9.1048e-41, -4.8702e-40,
          1.9124e-10,  7.0080e-40, -1.6647e-40,  1.8208e-40, -5.8348e-40,
          6.8920e-40,  6.3514e-40, -3.6843e-40, -5.2408e-40, -5.7458e-40,
         -5.7990e-40, -7.0300e-09, -5.0953e-40,  1.3417e-40, -4.7939e-08,
         -6.4532e-40, -5.9827e-40,  2.0284e-07, -7.2406e-40,  6.3660e-41,
         -7.1219e-40,  7.6521e-40]], device='cuda:0')
epoch 109, Update LR to 0.00010791792775285545, from IR 0.00010809372566149436
lidarv5-tmp-local-shuffle-0527/default |################################| train: [110][1652/1653]|Tot: 1:32:18 |ETA: 0:00:03 |loss 0.2435 |Data 0.001s(0.163s) |Net 3.350s         

2023-06-06 13:48:26.028 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:110
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.3029e-02,  1.6674e-01,  2.0139e-02, -1.4303e-02,  4.0977e-02,
         -4.8887e-02,  6.9707e-40,  2.6520e-01, -1.3121e-01, -5.7170e-02,
         -5.3660e-02,  1.4425e-02, -1.3138e-01,  6.8347e-03, -9.1855e-02,
         -1.2724e-02, -1.3704e-02, -1.1018e-01,  1.5807e-01, -4.9353e-02,
         -1.6057e-01,  5.5480e-40, -1.3748e-02,  2.6523e-40, -1.2608e-02,
          7.4852e-02, -8.0278e-02, -5.7886e-03, -1.1057e-01,  7.1755e-02,
          4.3413e-01,  3.4258e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[ 7.2155e-40,  5.4105e-40, -5.9084e-40, -4.2251e-40,  1.8611e-40,
         -3.0237e-40, -2.1031e-40,  3.8998e-40,  2.9932e-41,  4.8081e-40,
         -6.6405e-12, -7.2071e-40,  1.9647e-40, -1.8086e-40,  6.2632e-40,
         -7.3231e-40, -6.8052e-40,  3.5744e-40,  5.6473e-40,  6.3522e-40,
          6.2990e-40, -5.1814e-08,  4.5830e-40, -1.0779e-40,  1.0324e-07,
          6.8546e-40,  6.1153e-40,  7.3898e-08,  6.9745e-40, -5.7320e-41,
          6.7908e-40, -7.4704e-40]], device='cuda:0')
epoch 110, Update LR to 0.0001077420582214665, from IR 0.00010791792775285545
lidarv5-tmp-local-shuffle-0527/default |################################| train: [111][1652/1653]|Tot: 4:01:29 |ETA: 0:15:17 |loss 0.2427 |Data 0.001s(0.194s) |Net 8.765s          

2023-06-06 17:49:58.988 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:111
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.3735e-02,  1.6594e-01,  2.0995e-02, -1.3864e-02,  4.3143e-02,
         -4.6675e-02, -1.0313e-40,  2.6502e-01, -1.3447e-01, -6.1970e-02,
         -4.9949e-02,  1.6221e-02, -1.2958e-01,  7.3152e-03, -9.2446e-02,
         -1.1339e-02, -9.1593e-03, -1.1090e-01,  1.5972e-01, -4.8098e-02,
         -1.6073e-01, -5.3225e-40, -9.6607e-03, -2.1790e-40, -1.7980e-02,
          7.6374e-02, -8.2092e-02, -3.9734e-03, -1.0985e-01,  7.5603e-02,
          4.3414e-01,  3.2850e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[-6.9766e-40, -5.4600e-40,  6.1699e-40,  4.2298e-40, -1.7624e-40,
          3.0155e-40,  1.5204e-40, -3.3472e-40,  2.9932e-41, -4.8545e-40,
          2.6280e-40,  6.9850e-40, -1.6588e-40,  1.8149e-40, -5.8152e-40,
          6.8689e-40,  7.0849e-40, -3.6726e-40, -5.2232e-40, -6.7830e-40,
         -5.7794e-40,  7.6072e-08, -5.0796e-40,  1.3377e-40,  1.2533e-07,
         -6.4316e-40, -5.9631e-40,  1.9105e-07, -7.2175e-40,  6.3463e-41,
         -6.3444e-40,  7.6276e-40]], device='cuda:0')
epoch 111, Update LR to 0.00010756611689188883, from IR 0.0001077420582214665
lidarv5-tmp-local-shuffle-0527/default |################################| train: [112][1652/1653]|Tot: 2:37:22 |ETA: 0:00:03 |loss 0.2429 |Data 0.001s(0.147s) |Net 5.713s         

2023-06-06 20:27:26.031 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:112
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.5328e-02,  1.6668e-01,  2.4832e-02, -1.5430e-02,  4.3440e-02,
         -4.5102e-02, -4.6489e-40,  2.6581e-01, -1.3361e-01, -6.2535e-02,
         -4.8935e-02,  1.4702e-02, -1.3255e-01,  4.8784e-03, -9.1641e-02,
         -1.1174e-02, -6.1822e-03, -1.1170e-01,  1.6069e-01, -4.9373e-02,
         -1.6282e-01,  5.5301e-40, -7.4490e-03,  2.6444e-40, -1.7399e-02,
          7.9019e-02, -8.3373e-02, -1.6541e-03, -1.0684e-01,  7.6596e-02,
          4.3532e-01,  3.3184e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[ 7.1921e-40,  5.3926e-40, -5.8885e-40, -4.2111e-40,  1.8551e-40,
         -3.0137e-40, -8.9127e-41,  3.8878e-40, -9.0653e-41,  4.7922e-40,
         -2.1954e-40, -7.1837e-40,  1.9587e-40, -1.8026e-40,  6.2433e-40,
         -7.2998e-40, -7.2345e-40,  3.5625e-40,  5.6294e-40,  7.0842e-40,
          6.2791e-40, -3.6609e-08,  4.5671e-40, -1.0739e-40, -9.9773e-09,
          6.8327e-40,  6.0954e-40, -3.8545e-07,  6.9512e-40, -5.7121e-41,
          5.7140e-40, -7.4455e-40]], device='cuda:0')
epoch 112, Update LR to 0.00010739010358789431, from IR 0.00010756611689188883
lidarv5-tmp-local-shuffle-0527/default |################################| train: [113][1652/1653]|Tot: 4:16:20 |ETA: 0:00:03 |loss 0.2431 |Data 0.001s(0.155s) |Net 9.305s          

2023-06-07 00:43:51.296 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:113
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.3606e-02,  1.6583e-01,  2.8010e-02, -1.6459e-02,  4.5357e-02,
         -4.1937e-02,  4.8317e-40,  2.6461e-01, -1.2871e-01, -5.9981e-02,
         -4.6495e-02,  1.5602e-02, -1.3804e-01,  5.6710e-03, -8.9051e-02,
         -9.3686e-03, -6.5640e-03, -1.1087e-01,  1.6000e-01, -4.7224e-02,
         -1.6344e-01, -5.3048e-40, -6.9732e-03, -2.1712e-40, -2.4918e-02,
          8.0611e-02, -8.7258e-02, -1.9810e-03, -1.0488e-01,  7.7706e-02,
          4.3535e-01,  3.1565e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[-6.9536e-40, -5.4424e-40,  6.1503e-40,  4.2160e-40, -1.7565e-40,
          3.0057e-40,  3.1262e-41, -3.3355e-40,  1.5012e-40, -4.8389e-40,
          2.6202e-40,  6.9619e-40, -1.6529e-40,  1.8090e-40, -5.7956e-40,
          6.8459e-40,  7.2121e-40, -3.6608e-40, -5.2055e-40, -7.2119e-40,
         -5.7597e-40,  8.1427e-09, -5.0639e-40,  1.3338e-40, -2.3228e-09,
         -6.4100e-40, -5.9435e-40,  2.2931e-07, -7.1945e-40,  6.3267e-41,
         -5.1209e-40,  7.6030e-40]], device='cuda:0')
epoch 113, Update LR to 0.0001072140181324601, from IR 0.00010739010358789431
lidarv5-tmp-local-shuffle-0527/default |################################| train: [114][1652/1653]|Tot: 1:20:20 |ETA: 0:00:03 |loss 0.2424 |Data 0.001s(0.139s) |Net 2.916s         

2023-06-07 02:04:16.262 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:114
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.3471e-02,  1.6651e-01,  3.0158e-02, -1.5612e-02,  4.3737e-02,
         -3.8028e-02,  1.2260e-40,  2.6557e-01, -1.3308e-01, -6.2753e-02,
         -4.5945e-02,  9.6441e-03, -1.3712e-01,  6.3991e-03, -8.8163e-02,
         -8.5253e-03, -7.5668e-04, -1.1493e-01,  1.5635e-01, -4.6756e-02,
         -1.6808e-01,  5.5125e-40, -7.9192e-04,  2.6365e-40, -3.1136e-02,
          7.9366e-02, -8.8671e-02, -4.6701e-03, -1.0305e-01,  7.4724e-02,
          4.3699e-01,  3.2393e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[ 7.1690e-40,  5.3749e-40, -5.8689e-40, -4.1974e-40,  1.8492e-40,
         -3.0039e-40, -8.8931e-41,  3.8761e-40, -2.1045e-40,  4.7765e-40,
         -2.1875e-40, -7.1606e-40,  1.9528e-40, -1.7967e-40,  6.2236e-40,
         -7.2767e-40, -7.0607e-40,  3.5507e-40,  5.6118e-40,  7.2111e-40,
          6.2595e-40, -9.3816e-09,  4.5514e-40, -1.0700e-40, -1.1353e-07,
          6.8111e-40,  6.0758e-40,  1.4323e-07,  6.9281e-40, -5.6925e-41,
          5.6964e-40, -7.4210e-40]], device='cuda:0')
epoch 114, Update LR to 0.00010703786034776358, from IR 0.0001072140181324601
lidarv5-tmp-local-shuffle-0527/default |################################| train: [115][1652/1653]|Tot: 1:50:40 |ETA: 0:00:03 |loss 0.2438 |Data 0.001s(0.134s) |Net 4.017s          

2023-06-07 03:54:59.927 | INFO     | models.model:save_model:117 - save_model, model:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth, epoch:115
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')            
conv1.bias
tensor([[-4.1348e-02,  1.6472e-01,  3.2278e-02, -1.5598e-02,  4.7575e-02,
         -3.1791e-02, -6.7235e-40,  2.6458e-01, -1.3766e-01, -6.4600e-02,
         -4.6662e-02,  1.0827e-02, -1.4016e-01,  2.2446e-03, -8.2754e-02,
         -8.0155e-03,  7.9800e-03, -1.1467e-01,  1.5884e-01, -4.6665e-02,
         -1.7129e-01, -5.2869e-40,  5.0063e-03, -2.1632e-40, -4.1148e-02,
          8.1584e-02, -9.4614e-02, -6.8896e-03, -1.0554e-01,  8.0332e-02,
          4.3656e-01,  2.9243e-02]], device='cuda:0')
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[-6.9302e-40, -5.4245e-40,  6.1304e-40,  4.2021e-40, -1.7506e-40,
          2.9957e-40,  3.1063e-41, -3.3235e-40,  3.8951e-40, -4.8229e-40,
          2.6122e-40,  6.9386e-40, -1.6469e-40,  1.8031e-40, -5.7757e-40,
          6.8225e-40,  7.6385e-40, -3.6489e-40, -5.1876e-40, -7.7881e-40,
         -5.7398e-40, -1.9782e-08, -5.0480e-40,  1.3298e-40,  2.8949e-07,
         -6.3881e-40, -5.9236e-40,  7.4507e-08, -7.1711e-40,  6.3068e-41,
         -3.9031e-40,  7.5782e-40]], device='cuda:0')
epoch 115, Update LR to 0.000106861630055177, from IR 0.00010703786034776358
lidarv5-tmp-local-shuffle-0527/default |###################             | train: [116][1017/1653]|Tot: 6:24:33 |ETA: 0:22:17 |loss 0.2406 |Data 0.001s(0.228s) |Net 22.666s
[E ProcessGroupNCCL.cpp:587] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18002499 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18004752 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18005102 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
[E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
lidarv5-tmp-local-shuffle-0527/default |###################             | train: [116][1018/1653]|Tot: 11:24:42 |ETA: 0:22:02 |loss 0.2430 |Data 0.001s(4.305s) |Net 40.316s
terminate called after throwing an instance of 'std::runtime_error'
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18005102 milliseconds before timing out.
  what():  [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18004752 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18002499 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18008064 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18008262 milliseconds before timing out.
lidarv5-tmp-local-shuffle-0527/default |###################             | train: [116][1018/1653]|Tot: 11:24:44 |ETA: 0:22:02 |loss 0.2406 |Data 0.001s(0.227s) |Net 40.318s
[E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18008262 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=18000000) ran for 18008064 milliseconds before timing out.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 81719 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 81720 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 81722 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 81717) of binary: /usr/local/anaconda3/envs/CenterNet/bin/python
Traceback (most recent call last):
  File "/usr/local/anaconda3/envs/CenterNet/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/anaconda3/envs/CenterNet/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/anaconda3/envs/CenterNet/lib/python3.9/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/usr/local/anaconda3/envs/CenterNet/lib/python3.9/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/usr/local/anaconda3/envs/CenterNet/lib/python3.9/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/usr/local/anaconda3/envs/CenterNet/lib/python3.9/site-packages/torch/distributed/run.py", line 710, in run
    elastic_launch(
  File "/usr/local/anaconda3/envs/CenterNet/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/anaconda3/envs/CenterNet/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
======================================================
main.py FAILED
------------------------------------------------------
Failures:
[1]:
  time      : 2023-06-07_15:19:52
  host      : localhost
  rank      : 1 (local_rank: 1)
  exitcode  : -6 (pid: 81718)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 81718
[2]:
  time      : 2023-06-07_15:19:52
  host      : localhost
  rank      : 4 (local_rank: 4)
  exitcode  : -6 (pid: 81721)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 81721
------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-06-07_15:19:52
  host      : localhost
  rank      : 0 (local_rank: 0)
  exitcode  : -6 (pid: 81717)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 81717
======================================================
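Editor's note on the failure table above: torch.distributed reports a negative `exitcode` when a child process was killed by a signal, so `exitcode: -6` means signal 6, i.e. SIGABRT — consistent with the `Signal 6 (SIGABRT) received` tracebacks and with the `std::runtime_error` the NCCL watchdog throws on each rank. A quick stdlib check of that convention:

```python
import signal

# exitcode -6 in the elastic failure table means "terminated by signal 6",
# which is SIGABRT -- matching the tracebacks printed for PIDs 81717/81718/81721.
assert signal.SIGABRT == 6
```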
(CenterNet) nvidia@SSADL3816:/data/ljw/seg_train_nfs/seg/source$ 

(CenterNet) nvidia@SSADL3816:/data/ljw/seg_train_nfs/seg/source$ sh ./train_qat_local_continue.sh 
/usr/local/anaconda3/envs/CenterNet/lib/python3.9/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
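As the deprecation warning suggests, scripts migrating from `torch.distributed.launch` to `torchrun` should read the local rank from the environment rather than an argparse `--local_rank` flag. A minimal sketch (the helper name is hypothetical, not from this repo):

```python
import os

def get_local_rank() -> int:
    """Read the rank that torchrun exports via the LOCAL_RANK env var.

    Falls back to 0 for single-process runs, mirroring the old
    --local_rank default under torch.distributed.launch.
    """
    return int(os.environ.get("LOCAL_RANK", "0"))
```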
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
*****************************************
The output will be saved to  /Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test
2023-06-07 15:45:15.232 | INFO     | quantization.quantize_lx:<module>:31 - ['/Data/ljw/seg_train_nfs/seg/pytorch-quantization_v2.1.0/pytorch_quantization/nn']
Creating model...
2023-06-07 15:45:15.330 | INFO     | nv_calib:run:148 - 1.10.2+cu111
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:148 - 1.10.2+cu111
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:149 - Parse...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:149 - Parse...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:148 - 1.10.2+cu111
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:148 - 1.10.2+cu111
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:156 - Init quan 3...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:156 - Init quan 1...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:148 - 1.10.2+cu111
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:149 - Parse...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:149 - Parse...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:149 - Parse...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:156 - Init quan 5...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:156 - Init quan 4...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:156 - Init quan 2...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:159 - Build QDQ model 1...
2023-06-07 15:45:15.331 | INFO     | nv_calib:run:159 - Build QDQ model 3...
2023-06-07 15:45:15.331 | INFO     | lib.models.model:create_model_quan:22 - create_model phase:train, qdq:True
2023-06-07 15:45:15.331 | INFO     | lib.models.model:create_model_quan:22 - create_model phase:train, qdq:True
2023-06-07 15:45:15.332 | INFO     | nv_calib:run:159 - Build QDQ model 4...
2023-06-07 15:45:15.332 | INFO     | nv_calib:run:159 - Build QDQ model 2...
2023-06-07 15:45:15.332 | INFO     | nv_calib:run:159 - Build QDQ model 5...
2023-06-07 15:45:15.332 | INFO     | lib.models.model:create_model_quan:22 - create_model phase:train, qdq:True
2023-06-07 15:45:15.332 | INFO     | lib.models.model:create_model_quan:22 - create_model phase:train, qdq:True
2023-06-07 15:45:15.332 | INFO     | lib.models.model:create_model_quan:22 - create_model phase:train, qdq:True
/Data/ljw/seg_train_nfs/seg/pytorch-quantization_v2.1.0/pytorch_quantization/nn/modules/tensor_quantizer.py:285: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  inputs, amax.item() / bound, 0,
/Data/ljw/seg_train_nfs/seg/pytorch-quantization_v2.1.0/pytorch_quantization/nn/modules/tensor_quantizer.py:291: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  quant_dim = list(amax.shape).index(list(amax_sequeeze.shape)[0])
Namespace(task='lidarv5-tmp-local-shuffle-0527', local_rank=0, dataset='lidar128', test=False, load_model='/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth', resume=True, gpus=[0, 1, 2, 3, 4, 5], num_workers=112, not_cuda_benchmark=False, seed=317, print_iter=0, hide_data_time=False, save_all=True, metric='loss', vis_thresh=0.3, debugger_theme='white', arch='salsa', down_ratio=4, input_res=-1, input_h=-1, input_w=-1, input_c=16, lr=0.000125, num_epochs=600, batch_size=16, num_iters=-1, val_intervals=5, trainval=False, aug_lidar=0.8, ignore_index=10, align_size=100000, weight_decay=1e-05, exp_id='test', user_spec=True, qdq=True, onnx_out='lidarnet_seg_qat.onnx', fp32_ckpt_file='/Data/ljw/seg_train_nfs/seg/exp/lidarv5-tmp-local-shuffle-0527/test/model_last.pth', ptq_pth_file='', calib_dataset_path='', exec_calib=False, gpus_str='0,1,2,3,4,5', root_dir='/Data/ljw/seg_train_nfs/seg/source/lib/../..', data_dir='/Data/ljw/seg_train_nfs/seg/source/lib/../../data', exp_dir='/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527', save_dir='/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test', debug_dir='/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test/debug')
Creating model...
2023-06-07 15:45:15.787 | INFO     | nv_calib:run:148 - 1.10.2+cu111
2023-06-07 15:45:15.787 | INFO     | nv_calib:run:149 - Parse...
2023-06-07 15:45:15.788 | INFO     | nv_calib:run:156 - Init quan 0...
2023-06-07 15:45:15.788 | INFO     | nv_calib:run:159 - Build QDQ model 0...
2023-06-07 15:45:15.789 | INFO     | lib.models.model:create_model_quan:22 - create_model phase:train, qdq:True
/Data/ljw/seg_train_nfs/seg/pytorch-quantization_v2.1.0/pytorch_quantization/nn/modules/tensor_quantizer.py:285: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  inputs, amax.item() / bound, 0,
/Data/ljw/seg_train_nfs/seg/pytorch-quantization_v2.1.0/pytorch_quantization/nn/modules/tensor_quantizer.py:291: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  quant_dim = list(amax.shape).index(list(amax_sequeeze.shape)[0])
2023-06-07 15:45:28.237 | INFO     | nv_calib:make_model:58 - checkpoint epoch:115
2023-06-07 15:45:28.239 | INFO     | nv_calib:make_model:62 - checkpoint conv1.bias:tensor([[-4.1348e-02,  1.6472e-01,  3.2278e-02, -1.5598e-02,  4.7575e-02,
         -3.1791e-02, -6.7235e-40,  2.6458e-01, -1.3766e-01, -6.4600e-02,
         -4.6662e-02,  1.0827e-02, -1.4016e-01,  2.2446e-03, -8.2754e-02,
         -8.0155e-03,  7.9800e-03, -1.1467e-01,  1.5884e-01, -4.6665e-02,
         -1.7129e-01, -5.2869e-40,  5.0063e-03, -2.1632e-40, -4.1148e-02,
          8.1584e-02, -9.4614e-02, -6.8896e-03, -1.0554e-01,  8.0332e-02,
          4.3656e-01,  2.9243e-02]])
2023-06-07 15:45:28.239 | INFO     | nv_calib:make_model:63 - checkpoint conv2.bias:tensor([[-6.9302e-40, -5.4245e-40,  6.1304e-40,  4.2021e-40, -1.7506e-40,
          2.9957e-40,  3.1063e-41, -3.3235e-40,  3.8951e-40, -4.8229e-40,
          2.6122e-40,  6.9386e-40, -1.6469e-40,  1.8031e-40, -5.7757e-40,
          6.8225e-40,  7.6385e-40, -3.6489e-40, -5.1876e-40, -7.7881e-40,
         -5.7398e-40, -1.9782e-08, -5.0480e-40,  1.3298e-40,  2.8949e-07,
         -6.3881e-40, -5.9236e-40,  7.4507e-08, -7.1711e-40,  6.3068e-41,
         -3.9031e-40,  7.5782e-40]])
2023-06-07 15:45:28.240 | INFO     | nv_calib:make_model:65 - checkpoint conv1._input_quantizer._amax:tensor([[1539.4375]])
2023-06-07 15:45:28.240 | INFO     | nv_calib:make_model:66 - checkpoint conv1._weight_quantizer._amax:tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]])
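Editor's note: the `_amax` values logged above are the calibration ranges that pytorch-quantization converts into int8 scales — as the `tensor_quantizer.py:285` TracerWarnings earlier show, the fake-quant op receives `amax / bound`, where `bound` is 127 (2^(8-1) − 1) for signed 8-bit. Decoding the `conv1._input_quantizer` value from the checkpoint (the arithmetic here is illustration only):

```python
# amax copied from the "conv1._input_quantizer._amax" checkpoint line above;
# bound = 2**(8-1) - 1 = 127 for signed int8 in pytorch-quantization.
amax = 1539.4375
bound = 127.0

# Per-tensor input scale used by the fake-quant op: amax / bound.
scale = amax / bound
assert abs(scale - 12.1216) < 1e-3
```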
2023-06-07 15:45:32.433 | INFO     | nv_calib:run:183 - Done
Resumed optimizer with start lr:0.000106861630055177  @ start_epoch:115
Setting up data...
2023-06-07 15:45:32.700 | INFO     | nv_calib:make_model:73 - epoch:115
conv1._input_quantizer
tensor([[1539.4375]], device='cuda:0')
conv1._weight_quantizer
tensor([[0.5576, 0.5451, 0.3167, 0.3039, 0.4727, 0.4320, 0.1625, 0.4627, 0.3156,
         0.3947, 0.5705, 0.6998, 0.5736, 0.6587, 0.5044, 0.5329, 0.3364, 0.6277,
         0.7105, 0.4603, 0.2333, 0.2680, 0.4587, 0.2870, 0.3610, 0.3637, 0.3891,
         0.7792, 0.3351, 0.3386, 0.1254, 0.8209]], device='cuda:0')              
conv1.bias
tensor([[-4.1348e-02,  1.6472e-01,  3.2278e-02, -1.5598e-02,  4.7575e-02,
         -3.1791e-02, -6.7235e-40,  2.6458e-01, -1.3766e-01, -6.4600e-02,
         -4.6662e-02,  1.0827e-02, -1.4016e-01,  2.2446e-03, -8.2754e-02,
         -8.0155e-03,  7.9800e-03, -1.1467e-01,  1.5884e-01, -4.6665e-02,
         -1.7129e-01, -5.2869e-40,  5.0063e-03, -2.1632e-40, -4.1148e-02,
          8.1584e-02, -9.4614e-02, -6.8896e-03, -1.0554e-01,  8.0332e-02,
          4.3656e-01,  2.9243e-02]])
conv2._input_quantizer
tensor([[883.1326]], device='cuda:0')
conv2._weight_quantizer
tensor([[0.0743, 0.0991, 0.0849, 0.0749, 0.1368, 0.0897, 0.1589, 0.1310, 0.1057,
         0.0659, 0.1254, 0.0821, 0.1108, 0.1095, 0.1166, 0.0886, 0.1466, 0.0909,
         0.1240, 0.0732, 0.0986, 0.0759, 0.0796, 0.1083, 0.2092, 0.1230, 0.0817,
         0.1350, 0.0790, 0.0874, 0.0908, 0.0899]], device='cuda:0')
conv2.bias
tensor([[-6.9302e-40, -5.4245e-40,  6.1304e-40,  4.2021e-40, -1.7506e-40,
          2.9957e-40,  3.1063e-41, -3.3235e-40,  3.8951e-40, -4.8229e-40,
          2.6122e-40,  6.9386e-40, -1.6469e-40,  1.8031e-40, -5.7757e-40,
          6.8225e-40,  7.6385e-40, -3.6489e-40, -5.1876e-40, -7.7881e-40,
         -5.7398e-40, -1.9782e-08, -5.0480e-40,  1.3298e-40,  2.8949e-07,
         -6.3881e-40, -5.9236e-40,  7.4507e-08, -7.1711e-40,  6.3068e-41,
         -3.9031e-40,  7.5782e-40]])
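Editor's note: the `conv2.bias` magnitudes around 1e-40 sit below the smallest normal float32 (about 1.1755e-38), i.e. they are subnormal values that behave as near-zero — most likely channels whose biases have been driven to zero by weight decay rather than meaningful parameters. A stdlib check of that threshold:

```python
# Smallest positive *normal* float32; Python floats are float64, so we
# hard-code the IEEE-754 single-precision constant 2**-126.
FLT_MIN32 = 2.0 ** -126          # ~1.1754944e-38

bias_sample = -6.9302e-40        # first conv2.bias entry from the log above
assert abs(bias_sample) < FLT_MIN32   # subnormal in float32 terms
```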
2023-06-07 15:45:34.116 | INFO     | datasets.dataset.mini_data:__init__:45 - phase:train, loaded 158735 train images
2023-06-07 15:45:34.466 | INFO     | datasets.dataset.mini_data:__init__:45 - phase:train, loaded 52281 val images
Starting training...save_dir:/Data/ljw/seg_train_nfs/seg/source/lib/../../exp/lidarv5-tmp-local-shuffle-0527/test start_epoch:116, num_epochs:600
[W reducer.cpp:1303] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration,  which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
lidarv5-tmp-local-shuffle-0527/default |##                              | train: [116][146/1653]|Tot: 0:10:04 |ETA: 0:52:18 |loss 0.2401 |Data 0.002s(1.697s) |Net 4.110s