goodproj13 opened this issue 5 years ago
Please paste all the output from your training run so we can see if you enabled CUDA, etc.
Did you solve this problem? I have the same issue ...
I didn't solve the problem and ended up using someone else's implementation. Sorry!
Maybe I should try mmdetection or Detectron ...
Change the original code to:

```python
temp = torch.ones(batch_size) * target_ratio
self.ratio_list_batch[left_idx:(right_idx+1)] = temp
```
@Weizhongjin
> Change the original code to `temp = torch.ones(batch_size)*target_ratio` and `self.ratio_list_batch[left_idx:(right_idx+1)] = temp`
Doing the above fix throws this error:
```
TypeError: mul() received an invalid combination of arguments - got (numpy.int64), but expected one of:
 * (Tensor other)
      didn't match because some of the arguments have invalid types: (numpy.int64)
 * (float other)
      didn't match because some of the arguments have invalid types: (numpy.int64)
```
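For reference, the failure seems reproducible outside the repo; a minimal sketch, assuming the ratio comes through as a numpy.int64 scalar (not the repo's exact code path):

```python
# Minimal sketch reproducing the mul() failure above (assumed values; 0.4.x behavior as reported here).
import numpy as np
import torch

batch_size = 4
target_ratio = np.int64(1)       # the ratio arrives as a numpy.int64 scalar

ones = torch.ones(batch_size)    # FloatTensor
temp = ones * target_ratio       # on PyTorch 0.4.x this raises the mul() TypeError shown above
```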
Has anyone found a fix for this?
@EMCP
```
Called with args:
Namespace(batch_size=1, checkepoch=1, checkpoint=0, checkpoint_interval=10000, checksession=1, class_agnostic=False, cuda=True, dataset='hollywoodheads_scuta', disp_interval=100, lamda=0.1, large_scale=False, lr=0.002, lr_decay_gamma=0.1, lr_decay_step=6, mGPUs=True, max_epochs=5, net='vgg16', num_workers=0, optimizer='sgd', resume=False, save_dir='data/adaptation/experiments', session=1, start_epoch=1, use_tfboard=False)
loading our dataset...........
/export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/lib/model/utils/config.py:376: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
yaml_cfg = edict(yaml.load(f))
Using config:
{'ANCHOR_RATIOS': [0.5, 1, 2],
'ANCHOR_SCALES': [4, 8, 16, 32],
'CROP_RESIZE_WITH_MAX_POOL': False,
'CUDA': False,
'DATA_DIR': '/export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/data',
'DEDUP_BOXES': 0.0625,
'DSN_DIFF_WEIGHT': 100000,
'EPS': 1e-14,
'EXP_DIR': 'vgg16',
'FEAT_STRIDE': [16],
'GPU_ID': 0,
'MATLAB': 'matlab',
'MAX_NUM_GT_BOXES': 50,
'MOBILENET': {'DEPTH_MULTIPLIER': 1.0,
'FIXED_LAYERS': 5,
'REGU_DEPTH': False,
'WEIGHT_DECAY': 4e-05},
'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]),
'POOLING_MODE': 'align',
'POOLING_SIZE': 7,
'RESNET': {'FIXED_BLOCKS': 1, 'MAX_POOL': False},
'RNG_SEED': 3,
'ROOT_DIR': '/export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch',
'TEST': {'BBOX_REG': True,
'HAS_RPN': True,
'MAX_SIZE': 1000,
'MODE': 'nms',
'NMS': 0.3,
'PROPOSAL_METHOD': 'gt',
'RPN_MIN_SIZE': 16,
'RPN_NMS_THRESH': 0.7,
'RPN_POST_NMS_TOP_N': 300,
'RPN_PRE_NMS_TOP_N': 6000,
'RPN_TOP_N': 5000,
'SCALES': [600],
'SVM': False},
'TRAIN': {'ASPECT_GROUPING': False,
'BATCH_SIZE': 256,
'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
'BBOX_NORMALIZE_TARGETS': True,
'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,
'BBOX_REG': True,
'BBOX_THRESH': 0.5,
'BG_THRESH_HI': 0.5,
'BG_THRESH_LO': 0.0,
'BIAS_DECAY': False,
'BN_TRAIN': False,
'DISPLAY': 10,
'DOUBLE_BIAS': True,
'FG_FRACTION': 0.25,
'FG_THRESH': 0.5,
'GAMMA': 0.1,
'HAS_RPN': True,
'IMS_PER_BATCH': 1,
'LEARNING_RATE': 0.01,
'MAX_SIZE': 1000,
'MOMENTUM': 0.9,
'PROPOSAL_METHOD': 'gt',
'RPN_BATCHSIZE': 256,
'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'RPN_CLOBBER_POSITIVES': False,
'RPN_FG_FRACTION': 0.5,
'RPN_MIN_SIZE': 8,
'RPN_NEGATIVE_OVERLAP': 0.3,
'RPN_NMS_THRESH': 0.7,
'RPN_POSITIVE_OVERLAP': 0.7,
'RPN_POSITIVE_WEIGHT': -1.0,
'RPN_POST_NMS_TOP_N': 2000,
'RPN_PRE_NMS_TOP_N': 12000,
'SCALES': [600],
'SNAPSHOT_ITERS': 5000,
'SNAPSHOT_KEPT': 3,
'SNAPSHOT_PREFIX': 'res101_faster_rcnn',
'STEPSIZE': [30000],
'SUMMARY_INTERVAL': 180,
'TRIM_HEIGHT': 600,
'TRIM_WIDTH': 600,
'TRUNCATED': False,
'USE_ALL_GT': True,
'USE_FLIPPED': True,
'USE_GT': False,
'WEIGHT_DECAY': 0.0005},
'USE_GPU_NMS': True}
Loaded dataset `hollywoodheads_scuta_2007_train_s` for training
Set proposal method: gt
Appending horizontally-flipped training examples...
hollywoodheads_scuta_2007_train_s gt roidb loaded from /export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/data/cache/hollywoodheads_scuta_2007_train_s_gt_roidb.pkl
done
Preparing training data...
done
before filtering, there are 200 images...
after filtering, there are 200 images...
Source Train Size = 200
Loaded dataset `hollywoodheads_scuta_2007_train_t` for training
Set proposal method: gt
Appending horizontally-flipped training examples...
hollywoodheads_scuta_2007_train_t gt roidb loaded from /export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/data/cache/hollywoodheads_scuta_2007_train_t_gt_roidb.pkl
done
Preparing training data...
done
before filtering, there are 200 images...
after filtering, there are 200 images...
Target Train Size = 200
source 200 target 200 roidb entries
Traceback (most recent call last):
  File "da_trainval_net.py", line 257, in <module>
    s_imdb.num_classes, training=True)
  File "/export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/lib/roi_da_data_layer/roibatchLoader.py", line 55, in __init__
    self.ratio_list_batch[left_idx:(right_idx+1)] = target_ratio # trainset ratio list ,each batch is same number
TypeError: can't assign a numpy.int64 to a torch.FloatTensor
```
Can you possibly upgrade to PyTorch 1.0? I've used 1.x exclusively and had zero issues for months now.
@EMCP I'm working on domain adaptation (https://github.com/tiancity-NJU/da-faster-rcnn-PyTorch) and it uses the PyTorch 0.4 version of this repo. :(
Okay, I'm away from my deep learning rig, so I can't test PyTorch 0.4 until mid-June... but I would diff the two versions and make sure you've got any and all bug fixes that were pushed to the 1.x branch... I never ran the master branch, so I can't tell whether it's been properly patched or not.
IMO, the repo owner should just make a release branch for older PyTorch versions and keep the bleeding edge in master, instead of this 1.x-branch-off-on-the-side strategy.
```python
self.ratio_list_batch[left_idx:(right_idx+1)] = torch.tensor(target_ratio.astype(np.float64))  # trainset ratio list, each batch is the same number
```
This fixed the issue for 0.4.0
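For anyone else on 0.4.0, here is a minimal standalone sketch of the failing assignment and of the conversion above; the tensor size and slice indices are made-up values, not taken from roibatchLoader.py:

```python
# Minimal sketch of the failing assignment and the fix above
# (tensor size and slice indices are made-up values, not the repo's).
import numpy as np
import torch

ratio_list_batch = torch.Tensor(6).zero_()   # a FloatTensor, like the one roibatchLoader builds
target_ratio = np.int64(1)                   # the ratio comes through as a numpy.int64

# ratio_list_batch[0:3] = target_ratio       # on PyTorch 0.4.0 this raises:
#                                            # TypeError: can't assign a numpy.int64 to a torch.FloatTensor

# converting to a float tensor first assigns cleanly
ratio_list_batch[0:3] = torch.tensor(target_ratio.astype(np.float64))
print(ratio_list_batch)
```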
Feel free to submit a PR and close this, @benedictflorance.
Yeah, I've submitted one. https://github.com/jwyang/faster-rcnn.pytorch/pull/573
In `/lib/roi_data_layer/roibatchLoader.py`, line 52, change `target_ratio = 1` to `target_ratio = np.array(1)`.
To solve this for good, in this line change:

```python
ratio_large = 2  # largest ratio to preserve.
```

to:

```python
ratio_large = 2.0  # largest ratio to preserve.
```

PyTorch creates a tensor from these values, and its type is inferred from the data. A value of `2` is inferred as an `int`, so changing it to a floating-point value fixes the error.

Note that this error only occurs when execution falls into this `if` statement. The `if` statement below it uses `ratio = ratio_small`, and `ratio_small = 0.5` is already a floating-point value, as defined at the beginning of the function.
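A quick illustration of the dtype-inference point, using generic values rather than the repo's code path:

```python
# Dtype inference from literals (generic values, not the repo's code path).
import numpy as np
import torch

print(torch.tensor(2).dtype)        # torch.int64   -> an int literal yields an integer tensor
print(torch.tensor(2.0).dtype)      # torch.float32 -> a float literal yields a floating tensor

print(np.array([2, 2]).dtype)       # int64   -> ratios built only from ints stay integer
print(np.array([2.0, 0.5]).dtype)   # float64 -> a float anywhere promotes the whole array
```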
Guys, does anyone know what caused the issue below? Many thanks in advance. Torch: 0.4.0, Python: 3.6
We use the VOC format of the KITTI dataset. The only thing I changed in the original code from this repo is the image format from "jpg" to "png", which is the format the KITTI dataset uses. When I run `python trainval_net.py`, I get the error below:
```
Traceback (most recent call last):
  File "trainval_net.py", line 209, in <module>
    imdb.num_classes, training=True)
  File "/home/NewPartion/pycharm/faster-rcnn.pytorch/lib/roi_data_layer/roibatchLoader.py", line 54, in __init__
    self.ratio_list_batch[left_idx:(right_idx+1)] = target_ratio
TypeError: can't assign a numpy.int64 to a torch.FloatTensor
```