mberr / torch-max-mem

Decorators for maximizing memory utilization with PyTorch & CUDA
https://torch-max-mem.readthedocs.io/en/latest/
MIT License

Warnings for non-int types being emitted unexpectedly #12

Closed: cthoyt closed this 1 year ago

cthoyt commented 1 year ago

I ran the following in PyKEEN with torch-max-mem 0.1.1:

from pykeen.pipeline import pipeline

# train a small DistMult model on FB15k-237 with inverse triples
result = pipeline(
    dataset="fb15k237",
    dataset_kwargs=dict(
        create_inverse_triples=True,
    ),
    model="DistMult",
    model_kwargs=dict(
        embedding_dim=64,
    ),
)

and got the following warnings:

Memory utilization maximization is written for integer parameters, but the batch_size is annotated as int; casting to int
Memory utilization maximization is written for integer parameters, but the slice_size is annotated as int; casting to int
Memory utilization maximization is written for integer parameters, but the batch_size is annotated as int; casting to int
Memory utilization maximization is written for integer parameters, but the slice_size is annotated as int; casting to int

This is a bit confusing, since they are indeed ints. I wonder whether the following code has a mismatch between the string "int" and the builtin int:

https://github.com/mberr/torch-max-mem/blob/c2696838bee6e2cff0a91ae08f9930ca1218e76e/src/torch_max_mem/api.py#L130C24-L134
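For context, here is a minimal sketch (not the actual torch-max-mem code; the function names are made up) of how such a mismatch can produce the warning above: with a string annotation, or with from __future__ import annotations (PEP 563), inspect.signature reports the raw annotation as the string "int" rather than the builtin type int, so an identity check against int fails even though the parameter really is an int.

import inspect

def check_annotation(func, name: str) -> None:
    # hypothetical check mirroring the suspected bug: compare the raw
    # annotation object against the builtin type int by identity
    annotation = inspect.signature(func).parameters[name].annotation
    if annotation is not int:
        print(f"... but the {name} is annotated as {annotation}; casting to int")

# with a string annotation (or PEP 563), the raw annotation is the
# *string* "int", not the builtin int, so the check warns spuriously
def lookup(batch_size: "int") -> None:
    ...

check_annotation(lookup, "batch_size")  # prints the spurious warning

Note that printing the string "int" yields exactly "annotated as int", which matches the observed warning text.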

mberr commented 1 year ago

Yes, I think you are right.

mberr commented 1 year ago

Should be fixed with #13
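For reference, one way to make such a check robust to string annotations is to resolve them with typing.get_type_hints before comparing. A minimal sketch under that assumption (the helper name is hypothetical, and #13 may implement the fix differently):

import inspect
import typing

def resolve_annotation(func, name: str):
    # typing.get_type_hints evaluates postponed/string annotations,
    # so both int and "int" resolve to the builtin type int
    hints = typing.get_type_hints(func)
    return hints.get(name, inspect.Parameter.empty)

def lookup(batch_size: "int") -> None:
    ...

assert resolve_annotation(lookup, "batch_size") is int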