Closed zhangzhang1524 closed 1 year ago
==> Preparing Data Loader...
Traceback (most recent call last):
  File "/home/2022-CVPR-DART/run.py", line 504, in
    prob_A_V, prob_A_I = eval_train(net1, eval_loader, 'A')
  File "/home/2022-CVPR-DART/run.py", line 313, in eval_train
    losses_V_aug2[index_V[n1 + 32]] = loss1[n1 + 32]
IndexError: index 32 is out of bounds for axis 0 with size 32
Hi, could you provide more information about your configuration (the hyper-parameters after `python run.py`)?
Sorry, I'm a beginner. I saw this cutting-edge paper and wanted to reproduce it, and I may not understand the terminology you used, so I've excerpted the information you might need. Thank you for your reply.
parser = argparse.ArgumentParser(description='PyTorch Cross-Modality Training')
parser.add_argument('--dataset', default='sysu', help='dataset name: regdb or sysu]')
parser.add_argument('--lr', default=0.1, type=float, help='learning rate, 0.00035 for adam')
parser.add_argument('--optim', default='sgd', type=str, help='optimizer')
parser.add_argument('--arch', default='resnet50', type=str, help='network baseline:resnet18 or resnet50')
parser.add_argument('--resume-net1', default='', type=str, help='resume net1 from checkpoint')
parser.add_argument('--resume-net2', default='', type=str, help='resume net2 from checkpoint')
parser.add_argument('--model_path', default='./save_model/', type=str, help='model save path')
parser.add_argument('--save_epoch', default=20, type=int, metavar='s', help='save model every 10 epochs')
parser.add_argument('--workers', default=4, type=int, metavar='N', help='number of data loading workers (default: 4)')
parser.add_argument('--img_w', default=144, type=int, metavar='imgw', help='img width')
parser.add_argument('--img_h', default=288, type=int, metavar='imgh', help='img height')
parser.add_argument('--batch-size', default=4, type=int, metavar='B', help='training batch size')
parser.add_argument('--test-batch', default=64, type=int, metavar='tb', help='testing batch size')
parser.add_argument('--method', default='robust', type=str, metavar='m', help='method type: base or agw or robust')
parser.add_argument('--loss1', default='sid', type=str, help='loss type: id or soft id')
parser.add_argument('--loss2', default='robust_tri', type=str, metavar='m', help='loss type: wrt or adp or robust_tri')
parser.add_argument('--margin', default=0.3, type=float, metavar='margin', help='triplet loss margin')
parser.add_argument('--num_pos', default=4, type=int, help='num of pos per identity in each modality')
parser.add_argument('--trial', default=1, type=int, metavar='t', help='trial (only for RegDB dataset)')
parser.add_argument('--seed', default=0, type=int, metavar='t', help='random seed')
parser.add_argument('--gpu', default='3', type=str, help='gpu device ids for CUDA_VISIBLE_DEVICES')
parser.add_argument('--savename', default='release_sysu_dart_nr20', type=str, help='name of the saved model')
parser.add_argument('--mode', default='all', type=str, help='all or indoor')
parser.add_argument('--augc', default=1, type=int, metavar='aug', help='use channel aug or not')
parser.add_argument('--rande', default=0.5, type=float, metavar='ra', help='use random erasing or not and the probability')
parser.add_argument('--kl', default=0, type=float, metavar='kl', help='use kl loss and the weight')
parser.add_argument('--alpha', default=1, type=int, metavar='alpha', help='magnification for the hard mining')
parser.add_argument('--gamma', default=1, type=int, metavar='gamma', help='gamma for the hard mining')
parser.add_argument('--square', default=1, type=int, metavar='square', help='gamma for the hard mining')
parser.add_argument('--noise-mode', default='sym', type=str, help='sym')
parser.add_argument('--noise-rate', default=0.2, type=float, metavar='nr', help='noise_rate')
parser.add_argument('--data-path', default='./dataset/SYSU-MM01/', type=str, help='path to dataset')
parser.add_argument('--p-threshold', default=0.5, type=float, help='clean probability threshold')
parser.add_argument('--warm-epoch', default=1, type=int, help='epochs for net warming up')
root@iao:/home/2022-CVPR-DART# python run.py --gpu 0 --dataset sysu --noise-rate 0.2 --savename sysu_dart_nr20
Args: Namespace(dataset='sysu', lr=0.1, optim='sgd', arch='resnet50', resume_net1='', resume_net2='', model_path='./save_model/', save_epoch=20, workers=4, img_w=144, img_h=288, batch_size=4, test_batch=64, method='robust', loss1='sid', loss2='robust_tri', margin=0.3, num_pos=4, trial=1, seed=0, gpu='0', savename='sysu_dart_nr20', mode='all', augc=1, rande=0.5, kl=0, alpha=1, gamma=1, square=1, noise_mode='sym', noise_rate=0.2, data_path='./dataset/SYSU-MM01/', p_threshold=0.5, warm_epoch=1, drop_last=True)
==> Loading data..
train with 0.2 noisy rates
loading files and idx of noisy labels
Dataset sysu statistics:
  subset  | # ids | # images
  visible |   395 |    22258
  thermal |   395 |    11909
  query   |    96 |     3803
  gallery |    96 |      301
Data Loading Time: 34.902
==> Building model..
train with 0.2 noisy rates
loading files and idx of noisy labels
train with 0.2 noisy rates
loading files and idx of noisy labels
==> Start Training...
==> Preparing Data Loader...
Warmup Net1
sysu:0.2-sym | Epoch [ 0/ 80] Iter[   1/1393]  CE-loss: 5.9782  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[  51/1393]  CE-loss: 5.4853  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 101/1393]  CE-loss: 6.1761  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 151/1393]  CE-loss: 5.8664  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 201/1393]  CE-loss: 4.9210  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 251/1393]  CE-loss: 4.9776  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 301/1393]  CE-loss: 4.7589  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 351/1393]  CE-loss: 4.4946  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 401/1393]  CE-loss: 4.7785  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 451/1393]  CE-loss: 4.5223  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 501/1393]  CE-loss: 3.3214  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 551/1393]  CE-loss: 3.4842  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 601/1393]  CE-loss: 3.0825  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 651/1393]  CE-loss: 3.7328  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 701/1393]  CE-loss: 3.3046  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 751/1393]  CE-loss: 3.7783  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 801/1393]  CE-loss: 3.6628  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 851/1393]  CE-loss: 2.7544  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 901/1393]  CE-loss: 2.0932  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 951/1393]  CE-loss: 3.4492  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1001/1393]  CE-loss: 2.9154  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1051/1393]  CE-loss: 2.8697  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1101/1393]  CE-loss: 3.4063  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1151/1393]  CE-loss: 3.9795  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1201/1393]  CE-loss: 2.0655  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1251/1393]  CE-loss: 3.7501  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1301/1393]  CE-loss: 4.2255  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1351/1393]  CE-loss: 2.7369  Current-lr: 0.0100
Warmup Net2
sysu:0.2-sym | Epoch [ 0/ 80] Iter[   1/1393]  CE-loss: 5.9845  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[  51/1393]  CE-loss: 5.8238  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 101/1393]  CE-loss: 5.7436  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 151/1393]  CE-loss: 5.8638  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 201/1393]  CE-loss: 4.6205  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 251/1393]  CE-loss: 4.7066  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 301/1393]  CE-loss: 5.2194  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 351/1393]  CE-loss: 4.3184  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 401/1393]  CE-loss: 4.4477  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 451/1393]  CE-loss: 4.5132  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 501/1393]  CE-loss: 3.3674  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 551/1393]  CE-loss: 3.3225  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 601/1393]  CE-loss: 3.1671  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 651/1393]  CE-loss: 3.7594  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 701/1393]  CE-loss: 3.4605  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 751/1393]  CE-loss: 3.5731  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 801/1393]  CE-loss: 3.6904  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 851/1393]  CE-loss: 2.8062  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 901/1393]  CE-loss: 2.0503  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[ 951/1393]  CE-loss: 3.7040  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1001/1393]  CE-loss: 2.5633  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1051/1393]  CE-loss: 3.2262  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1101/1393]  CE-loss: 3.4745  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1151/1393]  CE-loss: 4.0567  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1201/1393]  CE-loss: 2.3980  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1251/1393]  CE-loss: 3.8579  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1301/1393]  CE-loss: 4.0358  Current-lr: 0.0100
sysu:0.2-sym | Epoch [ 0/ 80] Iter[1351/1393]  CE-loss: 2.9340  Current-lr: 0.0100
==> Preparing Data Loader...
Traceback (most recent call last):
  File "/home/2022-CVPR-DART/run.py", line 504, in
    prob_A_V, prob_A_I = eval_train(net1, eval_loader, 'A')
  File "/home/2022-CVPR-DART/run.py", line 313, in eval_train
    losses_V_aug2[index_V[n1 + 32]] = loss1[n1 + 32]
IndexError: index 32 is out of bounds for axis 0 with size 32
Hi, could you try changing the noise-rate value to 0. or 0.5?
Sorry, after making that change I still get the original error. I haven't modified the main code. How do you suggest I fix it?
Does it still error after changing it to 0.? I tried all of these before the release and they worked fine.
I re-downloaded the code and ran it, and the problem persists. If it works on your side, could it be an issue with my environment?
Could you delete the auto-generated .npy files starting with "0.2_" under the SYSU dataset directory and run it again?
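In concrete terms, the suggestion above amounts to something like this (assuming the default --data-path from the config shown earlier; adjust the path if yours differs):

```shell
# Delete the cached noisy-label files so the script regenerates them on the next run.
rm ./dataset/SYSU-MM01/0.2_*.npy
```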
OK, I'll try again tomorrow morning. Thank you.
Hi, it still doesn't work after the change. Maybe it's an environment issue. I'll go read the paper instead. Thanks for your effort.
Since the warmup runs, it shouldn't be an environment issue. It runs without problems on my side.
Thanks for the reply, I'll keep looking into it myself.
Did you change the batch size? I also get this error when I set it to 16; changing it back to 8 fixes it. I haven't figured out why yet.
I'm using a 20-series GPU with limited VRAM, so my batch size is 7. That's probably the cause.
It looks like the problem is line 313 of run.py. Could you try changing the 32 on that line to loader_batch? I accidentally hard-coded 32 there, which is why the error appears whenever you change batch_size.
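For readers hitting the same error, the maintainer's fix can be sketched as follows. The helper function and array sizes here are illustrative, not the repository's actual code; only the names losses_V_aug2, index_V, loss1, and the loader_batch replacement come from this thread:

```python
import numpy as np

def scatter_aug_losses(losses_V_aug2, index_V, loss1, loader_batch):
    """Scatter per-sample losses of the augmented half of a doubled batch
    into their global positions. Illustrative sketch of the fix."""
    for n1 in range(loader_batch):
        # Buggy version hard-coded the offset:
        #   losses_V_aug2[index_V[n1 + 32]] = loss1[n1 + 32]
        # which indexes past the end of index_V/loss1 as soon as the
        # actual batch size differs from 32.
        losses_V_aug2[index_V[n1 + loader_batch]] = loss1[n1 + loader_batch]
    return losses_V_aug2
```

With a batch of 4 doubled to 8 samples, the loop copies loss1[4:8] into the positions named by index_V[4:8], regardless of what batch size the loader was configured with.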