ericspod opened this issue 9 months ago (status: Open)
Thank you @ericspod for creating this post! cc: @dbericat
I've solved this issue by adding map_location=torch.device('cpu') to the model's load_state_dict call. If you set
"device": "$torch.device('cpu')",
in your inference.json, that may also work.
import torch
from monai.bundle import ConfigParser

# Parse the bundle config and instantiate the network it defines
configPath = "./models/yourmodelhere/configs/inference.yaml"
config = ConfigParser()
config.read_config(configPath)
model = config.get_parsed_content("network")

# map_location ensures the checkpoint loads even when CUDA is unavailable;
# modelPath should point to the bundle's weights file
model.load_state_dict(torch.load(modelPath, map_location=torch.device('cpu')))
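For reference, the inference.json override mentioned above might look like the fragment below. This is an illustrative sketch only: the other keys in a real bundle config are omitted, and bundle configs vary, but many define a top-level "device" entry that the rest of the config references.

```json
{
  "device": "$torch.device('cpu')"
}
```

A common pattern in bundle configs is "$torch.device('cuda' if torch.cuda.is_available() else 'cpu')", which falls back to CPU automatically instead of hard-coding either device.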
To avoid issues when running bundles in CPU mode, such as the one encountered below, all bundle weights should be stored on CPU. The alternative solution is to ensure any CheckpointLoader objects used have a map_location set to something that can be used without CUDA being present. For those that support CPU-only operation, some way of testing bundles without the presence of CUDA might be nice too.
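The first recommendation above (storing bundle weights on CPU) can be sketched with plain PyTorch. The tiny Linear model here is a hypothetical stand-in for a bundle network; the point is moving every tensor in the state dict to CPU before saving, so the checkpoint loads on machines without CUDA:

```python
import torch

# Hypothetical stand-in for a bundle's network
model = torch.nn.Linear(4, 2)

# Move every tensor in the state dict to CPU before saving, so the
# resulting checkpoint can be loaded without CUDA being present
cpu_state = {k: v.cpu() for k, v in model.state_dict().items()}
torch.save(cpu_state, "model_cpu.pt")

# map_location='cpu' makes loading safe even if the file held GPU tensors
state = torch.load("model_cpu.pt", map_location=torch.device("cpu"))
model.load_state_dict(state)
```

Loading with map_location set covers checkpoints that were already saved from GPU; re-saving with CPU tensors fixes the checkpoint itself so downstream users don't need to know about map_location at all.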
Discussed in https://github.com/Project-MONAI/model-zoo/discussions/516