Maelstrom2014 opened 1 month ago
For now, the only workaround is to delete all validation sampling nodes. :(
For validation sampling only, please allow those of us with 10-12 GB VRAM GPUs to use a quantized model that consumes less VRAM, such as GGUF. Maybe add an optional pretrained Flux model input to the Flux Train Validation Settings node. It's quite restrictive not to be able to follow the training's evolution.
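To illustrate the idea, here is a rough sketch of what the optional input could look like, following the standard ComfyUI custom-node pattern (`INPUT_TYPES` with a `required`/`optional` split). The class name, field names, and return types here are hypothetical placeholders, not the extension's actual API:

```python
# Hypothetical sketch: Flux Train Validation Settings node with an optional
# model input for a quantized (e.g. GGUF) model used only during validation
# sampling. Not the extension's real implementation.

class FluxTrainValidationSettings:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "steps": ("INT", {"default": 20, "min": 1, "max": 100}),
            },
            "optional": {
                # Hypothetical input: a lighter quantized model wired in only
                # for validation sampling, so 10-12 GB GPUs can still preview
                # training progress.
                "validation_model": ("MODEL",),
            },
        }

    RETURN_TYPES = ("VALSETTINGS",)
    FUNCTION = "build"
    CATEGORY = "FluxTrainer"

    def build(self, steps, validation_model=None):
        # Fall back to the training model when no quantized model is connected.
        return ({"steps": steps, "validation_model": validation_model},)
```

When `validation_model` is left unconnected, the node would behave exactly as it does today; connecting a GGUF loader's output would swap in the smaller model just for the sample passes.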
Thank you