Spandrel gives your project support for various PyTorch architectures for AI super-resolution, restoration, and inpainting. It is based on the model support implemented in chaiNNer.
Auto-calculate total model bytes on model descriptor #260
While profiling chaiNNer, I noticed that we spend a little bit of time on every upscale summing up the total bytes of the model. This only actually needs to be done once (when the model is instantiated), so I figured I could just do it here so that it's available on the model descriptor of every model.

I figure accounting only for fp32 here is fine -- if someone needs the fp16 amount, they can just divide it by 2.
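For reference, the summation in question is a single pass over the model's parameters. A minimal sketch of the fp32-only computation described above (the function name is illustrative, not spandrel's actual API):

```python
import torch

def total_model_bytes_fp32(model: torch.nn.Module) -> int:
    # Count every parameter element and assume 4 bytes each (fp32).
    # For fp16 weights, halve the result, as noted above.
    return sum(p.numel() for p in model.parameters()) * 4
```

A dtype-aware variant would use `p.numel() * p.element_size()` instead of a fixed 4 bytes per element.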
IMO this has nothing to do with spandrel. This issue is chaiNNer-specific, so why should spandrel change its API to solve it for chaiNNer? This property could just as well have been a weak hash map in chaiNNer.

You just made every model load slower to make chaiNNer a little faster. If anything, this should be a lazily computed property.
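Both alternatives raised in this review are easy to sketch: a weak map lets chaiNNer cache the size without any spandrel API change, and a lazily computed property defers the cost until the value is first read. A rough illustration under those assumptions (`ModelDescriptor`, `cached_model_bytes`, and `size_in_bytes` are hypothetical stand-ins, not spandrel's real names):

```python
import functools
import weakref

import torch

# Option 1: a chaiNNer-side weak map. Entries disappear automatically
# once the model itself is garbage collected, so nothing is kept alive.
_size_cache: "weakref.WeakKeyDictionary[torch.nn.Module, int]" = weakref.WeakKeyDictionary()

def cached_model_bytes(model: torch.nn.Module) -> int:
    size = _size_cache.get(model)
    if size is None:
        size = sum(p.numel() for p in model.parameters()) * 4  # fp32 assumption
        _size_cache[model] = size
    return size

# Option 2: a lazily computed property on the descriptor. Model loading
# pays nothing; the first access computes the sum once and caches it.
class ModelDescriptor:  # hypothetical stand-in for spandrel's descriptor
    def __init__(self, model: torch.nn.Module) -> None:
        self.model = model

    @functools.cached_property
    def size_in_bytes(self) -> int:
        return sum(p.numel() for p in self.model.parameters()) * 4
```

Either way, the per-upscale cost disappears without slowing down every model load.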