Kertnik opened this issue 9 months ago
I have this issue too. I keep my checkpoints in subdirectories, if that makes any difference.
I managed to fix the issue.
It's not enough to change the model's location; you also need to rebuild the engine afterwards.
As far as I can tell, the engine is built for a specific model name.
Yeah, the TRT/model name has its "/" replaced by underscores. My model "./default/realistic/juggernaut_reborn.safetensors" becomes "[TRT] default_realistic_juggernaut_reborn..."
I tried editing model.json and all the .trt and .onnx files to reflect "juggernaut_reborn..." and moved the checkpoint to the root checkpoint folder. I no longer got the NoneType error, but started seeing the PyTorch fallback.
I think it would have worked, though; I suspect the PyTorch fallback message is related to Highres fix. I'll test it again later.
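Based on the renaming described above, here is a minimal sketch of what appears to be happening (the function name and exact logic are my guess at the extension's behavior, not its actual code): the engine name is derived by flattening the checkpoint's relative path, but the lookup at generation time uses the bare checkpoint name, so the two never match.

```python
import os

def flatten_model_name(path: str) -> str:
    """Mimic the observed engine naming: strip the file extension,
    then replace path separators with underscores."""
    stem, _ = os.path.splitext(path)
    return stem.replace("\\", "_").replace("/", "_")

# A checkpoint stored in a subdirectory gets a flattened engine name:
exported = flatten_model_name("default/realistic/juggernaut_reborn.safetensors")
print(exported)  # default_realistic_juggernaut_reborn

# At generation time the lookup key is just the bare checkpoint name,
# so the flattened entry is never found -> KeyError / NoneType downstream.
engines = {exported: "models/Unet-trt/engine.trt"}  # hypothetical registry
print("juggernaut_reborn" in engines)  # False
```

This would explain why moving the checkpoint alone doesn't help: the stale flattened name is still baked into model.json and the exported files.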
Any progress with this?
I'm seeing the same thing. My checkpoints are categorized by base model, and the .onnx and .trt files (as well as the references in model.json) get the directories added to the name: XL\juggernautXL_v9Rdphoto2Lightning.safetensors
generates XL_juggernautXL_v9Rdphoto2Lightning.onnx
etc., and throws an error in model_manager.py:225:
KeyError: 'juggernautXL_v9Rdphoto2Lightning'
Moving the model after exporting the engine doesn't help, but it does seem like I can move the checkpoint to the base directory, export the engine, and move it back. Quite a pain, though, and it seems like correcting the filename at export time wouldn't be that difficult.
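The export-time fix suggested above could look something like this sketch: key the engine by the bare checkpoint name, dropping any subdirectory prefix, so it matches the key used at generation time. The function name is hypothetical; this is not the extension's actual code.

```python
def engine_key(checkpoint_path: str) -> str:
    """Hypothetical fix: derive the engine name from the file stem only,
    ignoring subdirectories, so export and lookup agree."""
    # Normalize both separator styles, then keep only the final component.
    name = checkpoint_path.replace("\\", "/").rsplit("/", 1)[-1]
    # Strip the file extension (.safetensors, .ckpt, ...).
    return name.rsplit(".", 1)[0]

print(engine_key(r"XL\juggernautXL_v9Rdphoto2Lightning.safetensors"))
# juggernautXL_v9Rdphoto2Lightning
```

Two checkpoints with the same filename in different subdirectories would collide under this scheme, so a real fix might need to disambiguate, but it matches the move-export-move-back workaround's effect.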
I have installed TensorRT and compiled the engine without an issue, but for some reason I get NoneType and KeyError. This behavior doesn't change with different models, different model locations, or VAEs, or if I manually select the desired UNet; I also tried this fix, and unfortunately nothing changed.
As I understand it, this problem is related to model locations and names that can't find each other, so I'm also attaching model.json in case it's useful. model.json
TensorRT isn't working for me either; my performance went down from 6-7 it/s to 2-3 it/s.