Currently, the Cutie framework uses ResNet models for its PixelEncoder and MaskEncoder.
The ResNet weights are downloaded on demand by torch into the user's local torch cache.
This means the user must be online at first use, otherwise processing fails.
This is an obstacle in environments where the Cutie framework has to run without internet access.
The suggestion is therefore to add an option called 'resnet_model_path' to the model configuration and to pass this path through the PixelEncoder and MaskEncoder down to the underlying torch model_zoo.load_url call.
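A minimal sketch of the idea, assuming the encoders can receive the configured value; the helper name 'load_resnet_weights' and the way it would be wired into PixelEncoder/MaskEncoder are illustrative, not Cutie's actual code:

```python
from typing import Optional

import torch
from torch.utils import model_zoo


def load_resnet_weights(url: str, resnet_model_path: Optional[str] = None) -> dict:
    """Load ResNet weights from a local file if configured, else from the model zoo.

    'resnet_model_path' is the proposed new config option. When it is not set,
    the behaviour is unchanged and the weights are downloaded as before.
    """
    if resnet_model_path is not None:
        # Offline path: read the pre-downloaded checkpoint from disk.
        return torch.load(resnet_model_path, map_location='cpu')
    # Online path: unchanged behaviour, download into the local torch cache.
    return model_zoo.load_url(url)
```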
The suggested code changes only take effect if 'resnet_model_path' is actually present in the model configuration. If it is absent, the ResNet models are downloaded as before, so existing setups are unaffected.
If a user then wants to use the Cutie framework offline, it is now possible to download the ResNet models in advance to a location of choice and point the model configuration at that location.
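A sketch of that offline workflow, run once on a machine with internet access; the checkpoint URL and local path below are placeholders and should be replaced with the ResNet variant that Cutie's resnet module actually references:

```python
import torch

# URL of the ResNet checkpoint that Cutie would normally fetch on demand
# (placeholder; use the URL referenced in Cutie's resnet module).
RESNET_URL = "https://download.pytorch.org/models/<resnet-checkpoint>.pth"
LOCAL_PATH = "/opt/models/resnet.pth"

# Download once, then copy LOCAL_PATH to the offline machine.
torch.hub.download_url_to_file(RESNET_URL, LOCAL_PATH)
```

With the checkpoint copied to the offline machine, the only remaining step is setting 'resnet_model_path' in the used model configuration to that location.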