[X] My problem is not about such an issue; in any case, I have already tried changing the extension directory name from sd-webui-segment-anything to a1111-sd-webui-segment-anything.
What happened?
I don't know if it's just me, but VRAM usage remains higher than before even after the process is complete.
I have a laptop with an RTX 3050.
Before starting:
VRAM: 1.4 GB.
During processing:
Usage climbs to its maximum, which is about 4 GB of VRAM for me.
After it finishes the mask:
VRAM stays at 2.4 GB the whole time (a measurement sketch follows below).
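For reference, here is a minimal sketch of how those three readings can be taken from the WebUI's Python process. It assumes PyTorch with CUDA and the pynvml package installed; none of this is code from the extension itself.

```python
# Minimal VRAM inspection sketch (assumes torch with CUDA and pynvml installed).
# Helps distinguish live tensors (allocated) from PyTorch's cache (reserved)
# and from what the driver / Task Manager reports (pynvml).
import torch
import pynvml

def report_vram(label: str) -> None:
    allocated = torch.cuda.memory_allocated() / 1024**2   # live tensors, MB
    reserved = torch.cuda.memory_reserved() / 1024**2     # caching allocator, MB
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    driver_used = pynvml.nvmlDeviceGetMemoryInfo(handle).used / 1024**2
    pynvml.nvmlShutdown()
    print(f"{label}: allocated={allocated:.0f} MB, "
          f"reserved={reserved:.0f} MB, driver={driver_used:.0f} MB")

report_vram("after mask generation")
```

If the driver-level number stays around 2.4 GB while the allocated number drops back near the pre-run value, the retained memory is mostly PyTorch's cache and/or models still resident on the GPU rather than leaked tensors.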
Steps to reproduce the problem
Run a single mask generation (in my case through the /sam/sam-predict API shown in the console logs) and watch VRAM after it finishes; a request sketch follows below.
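This is roughly the request I send. The /sam/sam-predict endpoint is the one that appears in the console logs, but the payload keys (input_image, dino_enabled, dino_text_prompt) are my approximation and may not match the extension's current API schema exactly.

```python
# Hypothetical reproduction sketch: one /sam/sam-predict request against a local WebUI.
# The endpoint is taken from the console logs; the payload keys are assumptions.
import base64
import requests

with open("test.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "input_image": image_b64,      # assumed field name
    "dino_enabled": True,          # assumed: GroundingDINO text prompt -> boxes
    "dino_text_prompt": "person",  # assumed field name
}

resp = requests.post("http://127.0.0.1:7860/sam/sam-predict",
                     json=payload, timeout=300)
print(resp.status_code)
```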
What should have happened?
After the process is complete, VRAM should be freed.
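For illustration only, the kind of cleanup I would expect after inference looks roughly like the sketch below. This is not the extension's actual code; sam_model and dino_model are placeholder names.

```python
# Hypothetical cleanup sketch (not the extension's actual code).
# After inference, move persistent models off the GPU, drop unreachable tensors,
# and return cached blocks to the driver so the reported VRAM drops back down.
import gc
import torch

def release_vram(sam_model=None, dino_model=None) -> None:
    if sam_model is not None:
        sam_model.to("cpu")      # keep the weights, but off the GPU
    if dino_model is not None:
        dino_model.to("cpu")
    gc.collect()                 # collect unreachable Python objects / tensors
    torch.cuda.empty_cache()     # return cached blocks to the driver
    torch.cuda.ipc_collect()     # clean up any CUDA IPC handles

release_vram()
```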
Commit where the problem happens
webui:
extension:
What browsers do you use to access the UI ?
No response
Command Line Arguments
The problem does not happen with --lowvram, but I don't want to use that flag because it slows down image generation. Other than that, I only use --xformers.
Console logs
venv "C:\Users\Allah\Desktop\Server\Server Documentation\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
Launching Web UI with arguments: --api --skip-torch-cuda-test --medvram --always-batch-cond-uncond --opt-sdp-attention --xformers
2023-09-16 18:25:10,110 - ControlNet - INFO - ControlNet v1.1.410
ControlNet preprocessor location: C:\Users\Allah\Desktop\Server\Server Documentation\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-09-16 18:25:10,195 - ControlNet - INFO - ControlNet v1.1.410
Loading weights [fe06753eee] from C:\Users\Allah\Desktop\Server\Server Documentation\stable-diffusion-webui\models\Stable-diffusion\uberRealisticPornMerge_urpmv13Inpainting.safetensors
Creating model from config: C:\Users\Allah\Desktop\Server\Server Documentation\stable-diffusion-webui\configs\v1-inpainting-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 10.0s (prepare environment: 0.6s, import torch: 4.2s, import gradio: 0.8s, setup paths: 0.8s, initialize shared: 0.7s, other imports: 0.6s, setup codeformer: 0.1s, load scripts: 1.2s, create ui: 0.7s, gradio launch: 0.3s).
Applying attention optimization: xformers... done.
Model loaded in 3.9s (load weights from disk: 0.4s, create model: 0.3s, apply weights to model: 1.2s, calculate empty prompt: 1.9s).
SAM API /sam/sam-predict received request
Start SAM Processing
Using local groundingdino.
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinT_OGC (694MB)
final text_encoder_type: bert-base-uncased
C:\Users\Allah\Desktop\Server\Server Documentation\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py:884: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Initializing SAM to cuda
Running SAM Inference (736, 736, 3)
SAM inference with 2 boxes, point prompts discarded
Creating output image
SAM API /sam/sam-predict finished with message: SAM inference with 2 boxes, point prompts discarded done.
SAM API /sam/sam-predict received request
Start SAM Processing
Using local groundingdino.
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinT_OGC (694MB)
Initializing SAM to cuda
Running SAM Inference (736, 736, 3)
SAM inference with 2 boxes, point prompts discarded
Creating output image
SAM API /sam/sam-predict finished with message: SAM inference with 2 boxes, point prompts discarded done.
Is there an existing issue for this?
Have you updated WebUI and this extension to the latest version?
Do you understand that you should read the 1st item of https://github.com/continue-revolution/sd-webui-segment-anything#faq if you cannot install GroundingDINO?
Do you understand that you should use the latest ControlNet extension and enable external control if you want SAM extension to control ControlNet?
Do you understand that you should read the 2nd item of https://github.com/continue-revolution/sd-webui-segment-anything#faq if you observe problems like "AttributeError: 'bool' object has no attribute 'enabled'" and "TypeError: 'bool' object is not subscriptable"?
Additional information
No response