yinchunhui-ahu / Recurrent_Tensor_Factorization

This is the implementation of our paper "Recurrent Tensor Factorization for Time-aware Service Recommendation"
http://bigdata.ahu.edu.cn

TypeError: list indices must be integers or slices, not tuple #1

Closed ffffffff0 closed 1 week ago

ffffffff0 commented 4 years ago

When I run the PGRU model, the following error appears. The operating environment has been set up as described in the README.

error info:

Traceback (most recent call last):
  File "D:/Desktop/Recurrent_Tensor_Factorization/PGRU.py", line 211, in <module>
    main()
  File "D:/Desktop/Recurrent_Tensor_Factorization/PGRU.py", line 58, in main
    model = PGRU(args, density)
  File "D:/Desktop/Recurrent_Tensor_Factorization/PGRU.py", line 97, in __init__
    self.model = self.load_model()
  File "D:/Desktop/Recurrent_Tensor_Factorization/PGRU.py", line 142, in load_model
    self.dropLayers)
  File "D:/Desktop/Recurrent_Tensor_Factorization/PGRU.py", line 189, in build_model
    gru_vector = layers([gru_vector, time_embedding])
  File "D:\Desktop\Recurrent_Tensor_Factorization\venv\lib\site-packages\keras\layers\recurrent.py", line 519, in __call__
    output = super(RNN, self).__call__(full_input, **kwargs)
  File "D:\Desktop\Recurrent_Tensor_Factorization\venv\lib\site-packages\keras\engine\topology.py", line 603, in __call__
    output = self.call(inputs, **kwargs)
  File "D:\Desktop\Recurrent_Tensor_Factorization\venv\lib\site-packages\keras\layers\recurrent.py", line 1503, in call
    self.cell._generate_dropout_mask(inputs, training=training)
  File "D:\Desktop\Recurrent_Tensor_Factorization\venv\lib\site-packages\keras\layers\recurrent.py", line 1268, in _generate_dropout_mask
    ones = K.ones_like(K.squeeze(inputs[:, 0:1, :], axis=1))
TypeError: list indices must be integers or slices, not tuple
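For context on the exception itself: the failing line indexes inputs with a tuple of slices (inputs[:, 0:1, :]), which a NumPy array or Keras tensor supports but a plain Python list does not, so the dropout-mask helper appears to be receiving a list of tensors rather than a single tensor. A minimal sketch (not code from this repository) that reproduces the message:

# Minimal sketch, not from the repository: tuple (multi-axis) indexing fails on a
# plain Python list but works on an ndarray or tensor.
import numpy as np

inputs = [np.ones((3, 4)), np.ones((3, 4))]    # a Python list of arrays
try:
    inputs[:, 0:1, :]                          # same indexing as in _generate_dropout_mask
except TypeError as err:
    print(err)                                 # list indices must be integers or slices, not tuple

stacked = np.ones((2, 3, 4))                   # a single array accepts the same slice
print(stacked[:, 0:1, :].shape)                # (2, 1, 4)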

yinchunhui-ahu commented 4 years ago

Please set the parameters in argparse to integers, not a tuple. Refer to the following example:

parser.add_argument('--gruLayers', default=[2048, 1, 1], type=list, help='Layers of MLP.')
parser.add_argument('--regLayers', default=[0., 0., 0.], type=list, help='Regularization.')
parser.add_argument('--dropLayers', default=[5e-1, 5e-1, 5e-1], type=list, help='Dropout.')
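
A side note on argparse behaviour (my own observation, not from the repository): type=list and type=tuple apply the constructor to the raw command-line string, so these options only hold numbers when their defaults are used. A hedged alternative sketch, reusing the option names from this thread, that keeps the values numeric whether they come from the defaults or from the command line (e.g. --gruLayers 2048 1 1):

# Sketch only; the option names follow this thread, the nargs approach is my suggestion.
import argparse

parser = argparse.ArgumentParser(description="Parameter Settings")
parser.add_argument('--gruLayers', default=[2048, 1, 1], nargs='+', type=int, help='Layers of MLP.')
parser.add_argument('--regLayers', default=[0., 0., 0.], nargs='+', type=float, help='Regularization.')
parser.add_argument('--dropLayers', default=[5e-1, 5e-1, 5e-1], nargs='+', type=float, help='Dropout.')

args = parser.parse_args([])                   # parse defaults only, for illustration
print(args.gruLayers, args.dropLayers)         # [2048, 1, 1] [0.5, 0.5, 0.5]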
ffffffff0 commented 4 years ago

My parameter settings are as follows:

parser = argparse.ArgumentParser(description="Parameter Settings")
parser.add_argument('--dataType', default='rt', type=str, help='Type of data:rt|tp.')
parser.add_argument('--shape', default=(142, 4500, 64), type=tuple, help='(UserNum,ItemNum,TimeNum).')
parser.add_argument('--parallel', default=False, type=bool, help='Whether to use multi-process.')
parser.add_argument('--density', default=list(np.arange(0.05, 21, 0.05)), type=list, help='Density of matrix.')
parser.add_argument('--epochNum', default=50, type=int, help='Numbers of epochs per run.')
parser.add_argument('--batchSize', default=2048, type=int, help='Size of a batch.')
parser.add_argument('--gruLayers', default=[2048, 1, 1], type=list, help='Layers of MLP.')
parser.add_argument('--regLayers', default=[0., 0., 0.], type=list, help='Regularization.')
parser.add_argument('--dropLayers', default=[5e-1, 5e-1, 5e-1], type=list, help='Dropout.')
parser.add_argument('--optimizer', default=Adam, type=str, help='The optimizer:Adam|Adamax|Nadam|Adagrad.')
parser.add_argument('--lr', default=1e-3, type=float, help='Learning rate of the model.')
parser.add_argument('--decay', default=0.0, type=float, help='Decay ratio for lr.')
parser.add_argument('--verbose', default=1, type=int, help='Iterations per evaluation.')
parser.add_argument('--store', default=True, type=bool, help='Whether to store the model and result.')
parser.add_argument('--dataPath', default='./Data/dataset#2/', type=str, help='Path to load data.')
parser.add_argument('--modelPath', default='./Model/', type=str, help='Path to save the model.')
parser.add_argument('--imagePath', default='./Image/', type=str, help='Path to save the image.')
parser.add_argument('--resultPath', default='./Result/', type=str, help='Path to save the result.')
args = parser.parse_args()

The error message is the same.
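
One more observation on the posted settings (an aside from me, not from the thread): argparse only applies type to strings coming from the command line, so the tuple/list defaults above pass through unchanged, while any value actually typed on the command line would be split into individual characters. Printing the parsed values before the model is built can confirm exactly what reaches the GRU layer:

# Hypothetical check, not part of the repository's code.
import argparse

p = argparse.ArgumentParser()
p.add_argument('--shape', default=(142, 4500, 64), type=tuple)
p.add_argument('--dropLayers', default=[5e-1, 5e-1, 5e-1], type=list)

print(p.parse_args([]).shape)                            # (142, 4500, 64) - default is untouched
print(p.parse_args(['--shape', '142,4500,64']).shape)    # ('1', '4', '2', ',', ...) - characters
print(p.parse_args(['--dropLayers', '0.5']).dropLayers)  # ['0', '.', '5']

If the printed values match the defaults, the argparse settings themselves are probably not the source of the tuple in the error.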

yahaaaaa commented 3 years ago

Has this problem been solved? I ran into the same issue.