I ran the following in PyKEEN with torch-max-mem 0.1.1:
from pykeen.pipeline import pipeline

result = pipeline(
    dataset="fb15k237",
    dataset_kwargs=dict(
        create_inverse_triples=True,
    ),
    model='DistMult',
    model_kwargs=dict(
        embedding_dim=64,
    ),
)
and got the following warnings:
Memory utilization maximization is written for integer parameters, but the batch_size is annotated as int; casting to int
Memory utilization maximization is written for integer parameters, but the slice_size is annotated as int; casting to int
Memory utilization maximization is written for integer parameters, but the batch_size is annotated as int; casting to int
Memory utilization maximization is written for integer parameters, but the slice_size is annotated as int; casting to int
This is a bit confusing since they are indeed ints. I wonder if the following code has a mismatch between the string "int" and the builtin int:
https://github.com/mberr/torch-max-mem/blob/c2696838bee6e2cff0a91ae08f9930ca1218e76e/src/torch_max_mem/api.py#L130C24-L134
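For illustration, here is a minimal sketch of what I suspect is happening, assuming the check compares the parameter's annotation (as seen by inspect) against the builtin int. The function name predict and the eval_str resolution below are only illustrative, not torch-max-mem's actual code: with postponed evaluation of annotations, inspect reports the string "int", so a comparison against the builtin int fails even though the annotation really is int.

# Minimal sketch of the suspected string-vs-builtin mismatch (illustrative only).
from __future__ import annotations  # annotations are stored as strings

import inspect


def predict(batch_size: int = 8) -> None:
    """Hypothetical function with an int-annotated parameter."""


param = inspect.signature(predict).parameters["batch_size"]
print(repr(param.annotation))   # 'int' -> a str, not the type int
print(param.annotation is int)  # False -> a check like this would emit the warning

# Resolving string annotations first avoids the false positive (Python >= 3.10):
resolved = inspect.signature(predict, eval_str=True).parameters["batch_size"]
print(resolved.annotation is int)  # True

If that is indeed the cause, comparing against typing.get_type_hints(func) or using eval_str=True instead of the raw annotation would make the warning go away for string-annotated int parameters.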