Closed · 5nail000 closed this issue 5 months ago
There is no Discussions tab here, so I'll ask my question in an issue 8))

In Stable Diffusion, training LoRA models is widespread (https://huggingface.co/docs/diffusers/main/en/training/lora, https://github.com/microsoft/LoRA/tree/main). Is it possible to integrate the Anti-Exploration by Random Network Distillation approach into current LoRA training algorithms and speed them up?
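For context, the core mechanism behind Random Network Distillation is a prediction-error novelty signal: a frozen, randomly initialized target network and a trainable predictor network both embed the same input, and the predictor's error against the target is low for familiar inputs and high for novel ones (anti-exploration methods use this error as a penalty). A minimal generic sketch in PyTorch, with all names hypothetical and unrelated to either linked repository:

```python
import torch
import torch.nn as nn


def make_rnd_pair(in_dim: int, out_dim: int = 64):
    """Build a frozen random target network and a trainable predictor."""
    target = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
    predictor = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
    for p in target.parameters():
        p.requires_grad_(False)  # the target stays fixed for the whole run
    return target, predictor


def rnd_bonus(target, predictor, x):
    """Per-sample prediction error of the predictor against the frozen target.

    Training the predictor to minimize this error on seen data makes the
    residual error a novelty score for unseen inputs.
    """
    with torch.no_grad():
        t = target(x)
    return ((predictor(x) - t) ** 2).mean(dim=-1)


# Toy usage: novelty scores for a batch of 4 feature vectors.
target, predictor = make_rnd_pair(in_dim=8)
x = torch.randn(4, 8)
bonus = rnd_bonus(target, predictor, x)  # shape (4,), non-negative
```

This only illustrates what the RND signal computes; whether and how such a score could plug into a diffusion LoRA training loop is exactly the open question being asked.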
Hi, I don't think these topics are quite related to each other.