picobyte / stable-diffusion-webui-wd14-tagger

Labeling extension for Automatic1111's Web UI

Unloading ML-Danbooru, is it possible without webui reload? #17

Open coder168 opened 11 months ago

coder168 commented 11 months ago

When clicking the "Unload all interrogate models" button, the following error occurs:

...
  File "D:\app2\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\ui.py", line 25, in unload_interrogators
    if i.unload():
  File "D:\app2\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 136, in unload
    gr.collect()
AttributeError: module 'gradio' has no attribute 'collect'

After a quick check, this error was caused by a code change (line 136 of \tagger\interrogator.py) introduced in a recent pull request merge - "Manually merged: Support ML-Danbooru https://github.com/picobyte/stable-diffusion-webui-wd14-tagger/pull/6, changes amended from CCRcmcpe's".

Please review the relevant code and fix it; currently the tagger cannot clean up its loaded models. Thanks. BTW, my webui env versions: version: v1.4.1  •  python: 3.10.8  •  torch: 2.0.0+cu118  •  xformers: 0.0.20  •  gradio: 3.32.0

picobyte commented 11 months ago

ok, will check. I never touch that button. Thanks for reporting.

picobyte commented 11 months ago

Removing the line will remove the attribute error, and I'll push that as a quick fix, but then the model won't unload. This exact line was one of the open questions I had for @CCRcmcpe, who provided the original pull request and might have a better solution for unloading. The collect() call seems to be part of the unloading code that is commented as dysfunctional (see this comment), and properly unloading tensorflow models is not possible, but maybe the MLDanbooruInterrogator implementation does not have this flaw?
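
For reference, the offending line apparently meant to call Python's garbage collector rather than anything in gradio; a minimal sketch of the presumably intended call (note that on its own it cannot force tensorflow to release VRAM):

import gc

# Explicitly run Python's garbage collector after dropping model references.
# This can return host RAM, but does not make tensorflow release GPU memory.
gc.collect()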

coder168 commented 11 months ago

Thanks for your review and quick fix.

So with this quick fix 5cbf4f5, the button works for unloading all loaded models except the ML-Danbooru models? Is my understanding correct?

picobyte commented 11 months ago

Edit: actually it seems related to tensorflow only, so it affects the DeepDanbooruInterrogator model and the experimental large_batch_interrogate.

Any of the others should unload properly. Note that with my implementation the results are re-read from db.json, even after a shutdown and reload of the Stable Diffusion webui (including a UI reload); queries will then read from the database. Or is that not enough on Windows? It allows you to retrieve/combine former interrogation output without loading the model.

Let's add a notice on the "Unload all interrogate models" button, or when unloading a Danbooru model, exactly for this purpose: unloading this model requires a webui reload (or a UI reload), and the interrogation results remain retrievable from the database without loading the model.

picobyte commented 11 months ago

On SO there is a solution mentioned:

from numba import cuda
cuda.select_device(0)
cuda.close()

But the numba documentation is unclear on how to reopen the same device again. And reading this SO thread, someone mentions that .close() is unrecoverable and suggests a .reset() instead; the numba docs do list a reset().

I was thinking about something like:

# Pick the cleanup strategy once, based on the device in use.
if use_cpu:
    import gc
else:
    from numba import cuda
...

class DeepDanbooruInterrogator(Interrogator):
    ...
    def unload(self) -> bool:
        unloaded = super().unload()

        if unloaded:
            if use_cpu:
                # On CPU, clearing the Keras session and running the garbage
                # collector is the best we can do to reclaim host memory.
                import tensorflow as tf
                tf.keras.backend.clear_session()
                gc.collect()
            else:
                # On CUDA, reset the device via numba to force VRAM release.
                device = cuda.get_current_device()
                device.reset()
        return unloaded

But I run on CPU.

coder168 commented 11 months ago

It is a little weird. I ran a quick test with the latest version, but VRAM usage does not seem to change; only a chunk of main memory gets released after unloading. I am using an Nvidia card.

WSH032 commented 11 months ago

Hi, you mentioned this: https://github.com/toriato/stable-diffusion-webui-wd14-tagger/issues/33, which refers to https://github.com/keras-team/keras/issues/2102

In my opinion, tf.keras.backend.clear_session() is not able to release GPU memory.

I don't think numba is a good idea, because it seems that numba depends on the Nvidia CUDA toolkit.

If you really want to release tensorflow memory, I would suggest using multiprocessing.Process to load the model and letting the OS release it when the process exits; see the sketch after the link below.


https://github.com/WSH032/image-deduplicate-cluster-webui/blob/da8b29cb997a33671477888c6d64c4f83bf0d41b/cluster_images.py#L70
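
A minimal sketch of that subprocess approach (the worker body, function names, and placeholder result are hypothetical, not this extension's or that repo's actual API): tensorflow is imported only inside the child process, so when the child exits the OS reclaims all of its memory, including VRAM.

import multiprocessing as mp

def _interrogate_worker(image_path, queue):
    # Import tensorflow only in the child, so the parent never holds it.
    import tensorflow as tf  # noqa: F401
    # ... load the model and run inference on image_path here ...
    tags = {"example_tag": 0.9}  # placeholder result
    queue.put(tags)

def interrogate_in_subprocess(image_path: str) -> dict:
    ctx = mp.get_context("spawn")  # "spawn" avoids inheriting CUDA state
    queue = ctx.Queue()
    proc = ctx.Process(target=_interrogate_worker, args=(image_path, queue))
    proc.start()
    tags = queue.get()  # fetch the result before joining
    proc.join()         # once the child exits, the OS frees all its memory
    return tags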

picobyte commented 11 months ago

Thanks, this is why I placed it behind an experimental option in the settings. The Nvidia dependency for numba did occur to me, but at least for Nvidia users numba could be an option? AMD (ROCm) and CPU are the others, or do even drivers like nouveau or nv work? I'm also not exactly sure which configurations do not release properly, and whether it is VRAM or RAM; I'm trying to get an impression.

Thanks for the links, I'll do some more digging.

WSH032 commented 11 months ago

One thing to note is that some users do not install the Nvidia CUDA toolkit, but use the CUDA runtime bundled with Torch. But the Nvidia CUDA toolkit is required by numba. That's why I think numba is not a good idea.

In fact, any tensorflow-based model will encounter this problem of not being able to release GPU memory.

The reason why other models, such as WD14, can be released normally is that they are ONNX models (check this), and onnxruntime can release them properly.
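
A minimal sketch of why the ONNX path unloads cleanly (the model path and input shape are placeholders, not this extension's actual files): onnxruntime releases a model's buffers once its InferenceSession is dropped.

import numpy as np
import onnxruntime as ort

# Creating the session is where onnxruntime allocates (GPU) memory.
session = ort.InferenceSession(
    "wd14-tagger.onnx",  # hypothetical path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
name = session.get_inputs()[0].name
probs = session.run(None, {name: np.zeros((1, 448, 448, 3), np.float32)})

# Dropping the last reference lets onnxruntime free those buffers again,
# which is why WD14-style models respond to the unload button.
del session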

gc.collect() is just Python's explicit garbage collection, which cannot release GPU memory.


toriato seems to have tried to release tensorflow memory as well, and I have tried it too, but this seems to be unsolvable. https://github.com/toriato/stable-diffusion-webui-wd14-tagger/issues/31


My solution

  1. Use tensorflow model in a subprocess, as I mentioned above.
  2. Convert tensorflow model to onnx model.

I usually prefer the second option, because ONNX models are faster to start and run. The downside is that you can't use the latest models unless you convert new models promptly after the model author releases a new version.

picobyte commented 11 months ago

Ah, thanks again. I found tf2onnx, which seems to do exactly this.
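
For reference, a minimal sketch of such a conversion with tf2onnx's Python API, assuming the model loads as a tf.keras model (paths and opset are placeholders):

import tensorflow as tf
import tf2onnx

# Load the TensorFlow/Keras model to convert (hypothetical path).
model = tf.keras.models.load_model("deepdanbooru-project")

# Convert the graph and write the ONNX file; opset 13 is an assumption.
tf2onnx.convert.from_keras(model, opset=13, output_path="deepdanbooru.onnx")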

WSH032 commented 11 months ago

You are welcome. I have a Colab notebook about tf2onnx that might be helpful, if you need it. But its code is commented in Chinese, and it needs some adjustments:

https://github.com/WSH032/wd-v1-4-tagger-feature-extractor-tutorials/blob/main/wd14_tf2onnx.ipynb

picobyte commented 11 months ago

One option I am considering is implementing a Settings -> Tagger checkbox: convert tensorflow models to ONNX. There might be checks necessary for updates; maybe keep the tensorflow model next to the ONNX model and rerun the conversion if the tensorflow model's mtime or sha256 changes. A sketch of such a check:
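
A minimal sketch of that staleness check (the sidecar metadata file and the assumption that the tensorflow model is a single file are mine, not an existing format):

import hashlib
import json
from pathlib import Path

def needs_reconversion(tf_path: Path, onnx_path: Path) -> bool:
    # Reconvert if the ONNX model or its metadata is missing, or if the
    # tensorflow model's mtime or sha256 changed since the last conversion.
    meta_path = onnx_path.with_suffix(".json")  # hypothetical sidecar file
    if not onnx_path.exists() or not meta_path.exists():
        return True
    meta = json.loads(meta_path.read_text())
    if meta.get("mtime") != tf_path.stat().st_mtime:
        return True  # cheap mtime check first
    sha = hashlib.sha256(tf_path.read_bytes()).hexdigest()
    return meta.get("sha256") != sha

def record_conversion(tf_path: Path, onnx_path: Path) -> None:
    # Store the tensorflow model's fingerprint after a successful conversion.
    meta = {
        "mtime": tf_path.stat().st_mtime,
        "sha256": hashlib.sha256(tf_path.read_bytes()).hexdigest(),
    }
    onnx_path.with_suffix(".json").write_text(json.dumps(meta))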