nnzhan / MTGNN

MIT License

Project dependencies may have API risk issues #36

Open PyDeps opened 1 year ago

PyDeps commented 1 year ago

Hi. In MTGNN, inappropriate dependency version constraints can introduce compatibility risks.

Below are the dependencies and version constraints that the project currently uses:

matplotlib==3.1.1
numpy==1.17.4
pandas==0.25.3
scipy==1.4.1
torch==1.2.0
scikit_learn==0.23.1

The == constraint pins dependencies too strictly and risks dependency conflicts with other installed packages. Conversely, constraints with no upper bound (or *) risk missing-API errors, because the latest versions of the dependencies may remove some APIs.

After further analysis of this project, the version constraint of pandas can be relaxed to >=0.25.0,<=1.4.2, and the version constraint of scipy to >=0.12.0,<=1.7.3.
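For illustration, here is a minimal sketch (not part of MTGNN, and simplified relative to real PEP 440 version handling) of checking an installed version against a range constraint like the ones suggested above:

```python
def _parse(version):
    # Split "1.4.2" into a comparable tuple (1, 4, 2).
    return tuple(int(part) for part in version.split("."))

def satisfies(version, constraint):
    """Return True if `version` meets every clause in a
    comma-separated constraint such as ">=0.25.0,<=1.4.2"."""
    for clause in constraint.split(","):
        if clause.startswith(">="):
            if _parse(version) < _parse(clause[2:]):
                return False
        elif clause.startswith("<="):
            if _parse(version) > _parse(clause[2:]):
                return False
        elif clause.startswith("=="):
            if _parse(version) != _parse(clause[2:]):
                return False
    return True

# The pinned pandas 0.25.3 falls inside the suggested range...
print(satisfies("0.25.3", ">=0.25.0,<=1.4.2"))  # True
# ...while a newer release outside the tested range is rejected.
print(satisfies("1.5.0", ">=0.25.0,<=1.4.2"))   # False
```

Real tooling should use `packaging.specifiers.SpecifierSet` instead, which handles pre-releases and non-numeric version segments correctly.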

These modifications reduce the chance of dependency conflicts while admitting the newest dependency versions that do not trigger API errors in the project.
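Applied to the project's requirements file, the suggested relaxation would look like the following (the pandas and scipy bounds are the issue's suggestion, not independently verified; the other pins are left as found):

```
matplotlib==3.1.1
numpy==1.17.4
pandas>=0.25.0,<=1.4.2
scipy>=0.12.0,<=1.7.3
torch==1.2.0
scikit_learn==0.23.1
```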

The project invokes all of the following methods.

Methods called from pandas:
pandas.read_hdf
Methods called from scipy:
scipy.sparse.identity
scipy.sparse.coo_matrix
scipy.sparse.eye
scipy.sparse.csr_matrix
scipy.sparse.diags
Methods called from all dependencies:
predict.data.cpu.numpy.mean
torch.eye
list
torch.sqrt
calculate_normalized_laplacian
predefined_A.to.to
train_time.append
torch.load.to
os.path.join
open
numpy.mean
self.tconv
masked_mse
torch.load.zero_grad
torch.arange
dv.view
torch.autograd.Variable
masked_mape
self.gc
generate_train_val_test
torch.nn.L1Loss.to
masked_mae
RuntimeError
acc.append
self.skip_convs.append
self.dilated_inception.super.__init__
self.end_conv_2
mean_g.Ytest.mean_p.predict.mean
float
numpy.power
dy_nconv
testx.transpose.transpose
numpy.zeros
criterion
torch.isnan
range
sum
print
self.residual_convs.append
scipy.sparse.csr_matrix
self.emb2
self.nconv
self.model
i.self.tconv
numpy.isnan
torch.LongTensor
numpy.sort
torch.rand_like
scipy.sparse.coo_matrix.dot
self.loss
F.dropout
rae.append
train_rmse.append
num_nodes.time_ind.np.tile.transpose
DataLoaderM
real.pred.masked_mae.item
numpy.float32.adj.d_mat.dot.astype.todense
scipy.sparse.identity
self.lin2
real.pred.masked_mape.item
scale.Y.scale.output.evaluateL2.item
d.np.power.flatten
nn.functional.pad
self.gc.transpose
abs
idx.size.idx.size.torch.zeros.to.scatter_
real.predict.util.masked_mape.item
numpy.argmin
trainer.Trainer.eval
numpy.sqrt
torch.Tensor
self.start_conv
testy.transpose.transpose
self.scaler.inverse_transform
torch.where.float
nn.ModuleList
self._normalized
torch.sigmoid
self.model.to
numpy.repeat
d_mat_inv_sqrt.d_mat_inv_sqrt.adj.dot.transpose.dot.astype
util.masked_rmse
str
engine.model.state_dict
torch.load.eval
trainer.Trainer.model
numpy.stack
locals
train
torch.nn.functional.relu.size
p.nelement
valid_rmse.append
torch.tanh.transpose
nnodes.nnodes.torch.randn.to
rowsum.np.power.flatten
torch.optim.Adagrad
self.num_nodes.torch.arange.to
self.norm.append
numpy.float32.L.astype.todense
scipy.sparse.coo_matrix.sum
device.nnodes.nnodes.torch.randn.to.nn.Parameter.to
torch.tanh
mixprop
self.mlp1
numpy.array.append
torch.cat.size
self.graph_constructor.super.__init__
idx.size
linear
data.std
math.sqrt
data_list.append
adj.d_mat.dot.astype
self._split
valid_loss.append
self.tconv.append
d_mat_inv_sqrt.adj.dot.transpose
evaluateL2
self.linear.super.__init__
self.lin1
torch.load.train
vcorr.append
ValueError
y.torch.Tensor.to
log.format
self.gconv2.append
index.correlation.mean
real.predict.util.masked_rmse.item
self.prop.super.__init__
masked_rmse
vmape.append
torch.load
enumerate
self.scale.torch.from_numpy.float
li.split.split
numpy.expand_dims
load_pickle
i.self.residual_convs
torch.softmax
trainer.Trainer
train_mape.append
load_adj
format
scipy.sparse.diags.dot
dataloader.torch.Tensor.to
F.relu
model
torch.load.parameters
real.pred.masked_rmse.item
trainy.transpose.transpose
corr.append
numpy.arange
test.data.cpu.numpy.mean
torch.squeeze.size
scipy.sparse.linalg.eigsh
self._makeOptimizer
torch.nn.MSELoss
dilated_inception
self.gconv1.append
val_time.append
self.LayerNorm.super.__init__
trainer.Optim
min
adj.size.torch.eye.to
generate_graph_seq2seq_io_data
self.model.train
torch.nn.init.zeros_
scipy.sparse.eye
vrmse.append
numpy.std
torch.nn.ModuleList
self.filter_convs.append
self.gate_convs.append
torch.nn.L1Loss
self.dy_nconv.super.__init__
numpy.stack.append
self.mixprop.super.__init__
realy.transpose.size
i.self.filter_convs
argparse.ArgumentParser
torch.nn.functional.relu
numpy.abs
torch.cat
torch.zeros_like
self.mlp2
adj.torch.rand_like.adj.topk
self.test.size
StandardScaler
torch.nn.Parameter
torch.nn.functional.relu.topk
trainer.Trainer.train
i.self.norm
torch.tensor
self.end_conv_2.size
self.graph_directed.super.__init__
numpy.maximum.reduce
nconv
numpy.max
criterion.backward
scale.Y.scale.output.evaluateL1.item
torch.squeeze.unsqueeze
numpy.ones
adj.sum.np.array.flatten
main
self.optimizer.step
scipy.sparse.csr_matrix.astype
evaluate
torch.save
self.optimizer.zero_grad
util.masked_mape
i.self.skip_convs
numpy.isinf
idx.size.idx.size.torch.zeros.to.fill_
net.gtnet
argparse.ArgumentParser.parse_args
torch.mm
torch.optim.Adadelta
pandas.read_hdf
numpy.loadtxt
self.model.parameters
predict.data.cpu.numpy
nn.Conv2d
X.to.to
self.loss.item
output.transpose.transpose
data.get_batches
self.end_conv_1
Y.to.to
_wrapper
self.mlp
vrae.append
vmae.append
torch.mean
tuple
torch.from_numpy
torch.where
i.self.gate_convs
predict.data.cpu.numpy.std
self.reset_parameters
X.transpose.transpose
torch.nn.functional.relu.sum
outputs.append
out.append
numpy.tile
self.scale.to
torch.nn.init.ones_
metric
criterion.item
scipy.sparse.diags
load_dataset.shuffle
scipy.sparse.coo_matrix
realy.transpose.transpose
test.data.cpu.numpy
s1.fill_
torch.zeros
df.index.values.astype
self.model.eval
torch.squeeze
adj.sum.view
torch.optim.Adam
self.graph_global.super.__init__
self.skip0
valid_mape.append
torch.nn.functional.layer_norm
torch.nn.functional.relu.transpose
numpy.concatenate
numpy.array.std
torch.set_num_threads
trainer.Optim.step
max
torch.device
d_mat_inv_sqrt.d_mat_inv_sqrt.adj.dot.transpose.dot.tocoo
self.loss.backward
net.gtnet.parameters
numpy.random.permutation
self.dilated_1D.super.__init__
preds.transpose.squeeze
graph_constructor
LayerNorm
scaler.inverse_transform
numpy.array
self.skipE
torch.unsqueeze
self.dy_mixprop.super.__init__
test.data.cpu.numpy.std
vacc.append
numpy.float32.d_mat_inv_sqrt.d_mat_inv_sqrt.adj.dot.transpose.dot.astype.todense
value.lower
nn.functional.pad.size
torch.optim.SGD
super
train_loss.append
self._batchify
predict.data.cpu
numpy.timedelta64
self.register_parameter
time.time
self.scale.expand
load_dataset
round
self.nconv.super.__init__
self.gtnet.super.__init__
torch.randn
test.data.cpu
DataLoaderS
torch.nn.Linear
torch.cat.append
numpy.sort.reshape
preds.transpose.transpose
torch.nn.utils.clip_grad_norm_
idx.size.idx.size.torch.zeros.to
d_mat_inv_sqrt.adj.dot.transpose.dot
numpy.savez_compressed
len
id.torch.tensor.to
normal_std
torch.abs
torch.nn.Embedding
self.emb1
i.self.gconv2
torch.randperm
torch.einsum
torch.no_grad
torch.nn.MSELoss.to
StandardScaler.transform
pickle.load
evaluateL1
torch.cat.contiguous
load_dataset.get_iterator
isinstance
engine.model.load_state_dict
int
li.split.strip
data.mean
trainx.transpose.transpose
his_loss.append
torch.nn.Conv2d
self.graph_undirected.super.__init__
numpy.load
i.self.gconv1
x.torch.Tensor.to
argparse.ArgumentParser.add_argument
data.scale.expand

@developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.