Woolverine94 / biniou

a self-hosted webui for 30+ generative ai
GNU General Public License v3.0

Home

biniou screenshot

biniou is a self-hosted webui for several kinds of GenAI (generative artificial intelligence). You can generate multimedia content with AI and use a chatbot on your own computer, even without a dedicated GPU and starting from 8GB RAM. It can work offline (once deployed and the required models downloaded).

GNU/Linux [ OpenSUSE base | RHEL base | Debian base ] • Windows • macOS Intel (experimental) • Docker
Documentation ❓ | Showroom 🖼️


Updates

List of archived updates


Menu

β€’ Features
β€’ Prerequisites
β€’ Installation
    GNU/Linux
      OpenSUSE Leap 15.5 / OpenSUSE Tumbleweed
      Rocky 9.3 / Alma 9.3 / CentOS Stream 9 / Fedora 39
      Debian 12 / Ubuntu 22.04.3 / Ubuntu 24.04 / Linux Mint 21.2
    Windows 10 / Windows 11
    macOS Intel Homebrew install
    Dockerfile
β€’ CUDA support
β€’ How To Use
β€’ Good to know
β€’ Credits
β€’ License


Features


Prerequisites

Note : biniou supports CUDA and ROCm but does not require a dedicated GPU to run. You can install it in a virtual machine.


Installation

GNU/Linux

OpenSUSE Leap 15.5 / OpenSUSE Tumbleweed

One-click installer :
  1. Copy/paste and execute the following command in a terminal :
    sh <(curl https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-opensuse.sh || wget -O - https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-opensuse.sh)

Rocky 9.3 / Alma 9.3 / CentOS Stream 9 / Fedora 39

One-click installer :
  1. Copy/paste and execute the following command in a terminal :
    sh <(curl https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-rhel.sh || wget -O - https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-rhel.sh)

Debian 12 / Ubuntu 22.04.3 / Ubuntu 24.04 / Linux Mint 21.2+

One-click installer :
  1. Copy/paste and execute the following command in a terminal :
    sh <(curl https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-debian.sh || wget -O - https://raw.githubusercontent.com/Woolverine94/biniou/main/oci-debian.sh)
Manual installation :
  1. Install the prerequisites as root :
    apt install git pip python3 python3-venv gcc perl make ffmpeg openssl
  2. Clone this repository as user :
    git clone https://github.com/Woolverine94/biniou.git
  3. Launch the installer :
    cd ./biniou
    ./install.sh
  4. (optional, but highly recommended) Install TCMalloc as root to optimize memory management :
    apt install google-perftools
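Before launching install.sh, a quick sanity check (a sketch, not part of biniou) can confirm that each tool from the apt line above is actually on the PATH:

```shell
# Check that each prerequisite from the apt line above is on the PATH.
for tool in git python3 gcc perl make ffmpeg openssl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: OK"
  else
    echo "$tool: MISSING"
  fi
done
```

Any line ending in MISSING points at a package that did not install correctly.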

Windows 10 / Windows 11

Windows installation has more prerequisites than the GNU/Linux one, and requires the following software (which will be installed automatically) :

These are a lot of changes to your operating system, and they could potentially cause unwanted behaviors, depending on which software is already installed on it.
⚠️ You should really make a backup of your system and data before starting the installation process. ⚠️


The whole installation is automated, but Windows UAC will ask you to confirm each piece of software installed during the "prerequisites" phase. You can avoid these prompts by running the chosen installer as administrator.

⚠️ Since commit 8d2537b, Windows users can define a custom path for the biniou directory when installing with install_win.cmd ⚠️

Proceed as follows :

macOS Intel Homebrew install

⚠️ Homebrew install is theoretically compatible with macOS Intel, but has not been tested. Use at your own risk. Also note that biniou is currently incompatible with Apple Silicon. Any feedback on this procedure through discussions or an issue ticket would be greatly appreciated. ⚠️

⚠️ Update 01/09/2024 : Thanks to @lepicodon, there's a workaround for Apple Silicon users : you can install biniou in a virtual machine using OrbStack. See this comment for explanations. ⚠️

  1. Install Homebrew for your operating system

  2. Install the required Homebrew "bottles" :

    brew install git python3 gcc gcc@11 perl make ffmpeg openssl
  3. Install python virtualenv :

    python3 -m pip install virtualenv
  4. Clone this repository as user :

    git clone https://github.com/Woolverine94/biniou.git
  5. Launch the installer :

    cd ./biniou
    ./install.sh

Dockerfile

These instructions assume that you already have a configured and working Docker environment.
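A quick way to verify that assumption (a generic check, not specific to biniou) is to ask the Docker daemon for its status before building:

```shell
# Succeeds only if the docker CLI exists and the daemon answers.
if docker info >/dev/null 2>&1; then
  echo "docker: OK"
else
  echo "docker: not available"
fi
```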

  1. Create the docker image :
    docker build -t biniou https://github.com/Woolverine94/biniou.git

    or, for CUDA support :

docker build -t biniou https://raw.githubusercontent.com/Woolverine94/biniou/main/CUDA/Dockerfile
  2. Launch the container :

    docker run -it --restart=always -p 7860:7860 \
    -v biniou_outputs:/home/biniou/biniou/outputs \
    -v biniou_models:/home/biniou/biniou/models \
    -v biniou_cache:/home/biniou/.cache/huggingface \
    -v biniou_gfpgan:/home/biniou/biniou/gfpgan \
    biniou:latest

    or, for CUDA support :

    docker run -it --gpus all --restart=always -p 7860:7860 \
    -v biniou_outputs:/home/biniou/biniou/outputs \
    -v biniou_models:/home/biniou/biniou/models \
    -v biniou_cache:/home/biniou/.cache/huggingface \
    -v biniou_gfpgan:/home/biniou/biniou/gfpgan \
    biniou:latest
  3. Access the webui at the URL :
    https://127.0.0.1:7860 or https://127.0.0.1:7860/?__theme=dark for dark theme (recommended)
    ... or replace 127.0.0.1 with the IP address of your container

Note : to save storage space, the container launch commands above define common shared volumes for all biniou containers and ensure that a container auto-restarts in case of an OOM crash. Remove the --restart and -v arguments if you don't want these behaviors.
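The same launch options can also be kept in a compose file instead of a long docker run command. A minimal sketch mirroring the flags above (the service name and file layout are illustrative, not part of biniou) :

```yaml
# docker-compose.yml - sketch mirroring the docker run flags above
services:
  biniou:
    image: biniou:latest
    restart: always
    ports:
      - "7860:7860"
    volumes:
      - biniou_outputs:/home/biniou/biniou/outputs
      - biniou_models:/home/biniou/biniou/models
      - biniou_cache:/home/biniou/.cache/huggingface
      - biniou_gfpgan:/home/biniou/biniou/gfpgan

volumes:
  biniou_outputs:
  biniou_models:
  biniou_cache:
  biniou_gfpgan:
```

Start it with docker compose up -d. If you build the CUDA image, GPU access additionally requires a device reservation in the compose file (or the --gpus flag with docker run).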


CUDA support

biniou is natively CPU-only to ensure compatibility with a wide range of hardware, but you can easily activate CUDA support through Nvidia CUDA (if you have a functional CUDA 12.1 environment) or AMD ROCm (if you have a functional ROCm 5.6 environment) by selecting the type of optimization to activate (CPU, CUDA or ROCm for Linux) in the WebUI control module.

Currently, all modules except the Chatbot, Llava and faceswap modules can benefit from CUDA optimization.
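A rough pre-flight sketch (not part of biniou; the detection heuristics are assumptions) to guess which optimization to pick in the WebUI control module :

```shell
# Guess which optimization to select in the WebUI control module:
# nvidia-smi on the PATH suggests CUDA, an /opt/rocm tree suggests ROCm,
# otherwise stay on the CPU default.
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "optimization: CUDA"
elif [ -d /opt/rocm ]; then
  echo "optimization: ROCm"
else
  echo "optimization: CPU"
fi
```

This is only a hint; the actual requirement is a working CUDA 12.1 or ROCm 5.6 environment as stated above.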


How To Use

  1. Launch by executing from the biniou directory :
    • for GNU/Linux :
      cd /home/$USER/biniou
      ./webui.sh
    • for Windows :

      Double-click webui.cmd in the biniou directory (C:\Users\%username%\biniou\). When asked by the UAC, configure the firewall according to your network type to authorize access to the webui.
      Note : first start could be very slow on Windows 11 (compared to other OSes).
  2. Access the webui at the URL :
    https://127.0.0.1:7860 or https://127.0.0.1:7860/?__theme=dark for dark theme (recommended)
    You can also access biniou from any device (including smartphones) on the same LAN/WiFi network by replacing 127.0.0.1 in the URL with the biniou host IP address.
  3. Quit by using the keyboard shortcut CTRL+C in the terminal.
  4. Update this application (biniou + python virtual environment) by using the update options of the WebUI control module.


Good to know

• The most frequent cause of crashes is a lack of memory on the host. The symptom is biniou closing and returning to/closing the terminal without any specific error message. You can use biniou with 8GB RAM, but at least 16GB is recommended to avoid OOM (out of memory) errors.
• biniou uses a lot of different AI models, which require a lot of space : if you want to use all the modules, you will need around 200GB of disk space for the default model of each module alone. Models are downloaded on the first run of each module, or when you select a new model in a module and generate content. Models are stored in the /models directory of the biniou installation. Unused models can be deleted to save some space.
• ... consequently, you will need fast internet access to download the models.
• A backup of every generated content is available inside the /outputs directory of the biniou folder.
• biniou natively relies only on the CPU for all operations, using a specific CPU-only version of PyTorch. The result is better compatibility with a wide range of hardware, but degraded performance. Depending on your hardware, expect slowness. See the CUDA support section for Nvidia CUDA support and experimental AMD ROCm support (GNU/Linux only).
• Default settings are selected to permit generation of content on low-end computers, with the best performance/quality ratio. If your configuration is above the minimal settings, you can try using other models, increasing media dimensions or duration, or modifying inference parameters or other settings (like token merging for images) to obtain better quality content.
• biniou is licensed under the GNU GPL3, but each model used in biniou has its own license. Please consult each model's license to know what you can and cannot do with it. For each model, you can find a link to its Hugging Face page in the "About" section of the associated module.
• Don't have too high expectations : biniou is at an early stage of development, and most of the open source software it uses is itself in development (some of it still experimental).
• Every biniou module offers 2 accordion elements, About and Settings :
    About is a quick help feature that describes the module and gives instructions and tips on how to use it.
    Settings is a settings panel specific to the module that lets you configure the generation parameters.


Credits

This application uses the following software and technologies :

• [🤗 Huggingface](https://huggingface.co/) : Diffusers and Transformers libraries and almost all the generative models
• [Gradio](https://www.gradio.app/) : webUI
• [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) : python bindings for llama-cpp
• [Llava](https://llava-vl.github.io/)
• [BakLLava](https://github.com/SkunkworksAI/BakLLaVA)
• [Microsoft GIT](https://github.com/microsoft/GenerativeImage2Text) : Image2text
• [Whisper](https://openai.com/research/whisper) : speech2text
• [nllb translation](https://ai.meta.com/research/no-language-left-behind/) : language translation
• [Stable Diffusion](https://stability.ai/stable-diffusion) : txt2img, img2img, Image variation, inpaint, ControlNet, Text2Video-Zero, img2vid
• [Kandinsky](https://github.com/ai-forever/Kandinsky-2) : txt2img
• [Latent consistency models](https://github.com/luosiallen/latent-consistency-model) : txt2img
• [PixArt-Alpha](https://pixart-alpha.github.io/) : PixArt-Alpha
• [IP-Adapter](https://ip-adapter.github.io/) : IP-Adapter img2img
• [Instruct pix2pix](https://www.timothybrooks.com/instruct-pix2pix) : pix2pix
• [MagicMix](https://magicmix.github.io/) : MagicMix
• [Fantasy Studio Paint by Example](https://github.com/Fantasy-Studio/Paint-by-Example) : paintbyex
• [Controlnet Auxiliary models](https://github.com/patrickvonplaten/controlnet_aux) : preview models for the ControlNet module
• [IP-Adapter FaceID](https://huggingface.co/h94/IP-Adapter-FaceID) : adapter model for the Photobooth module
• [Photomaker](https://huggingface.co/TencentARC/PhotoMaker) : adapter model for the Photobooth module
• [Insight Face](https://insightface.ai/) : faceswapping
• [Real ESRGAN](https://github.com/xinntao/Real-ESRGAN) : upscaler
• [GFPGAN](https://github.com/TencentARC/GFPGAN) : face restoration
• [Audiocraft](https://audiocraft.metademolab.com/) : musicgen, musicgen melody, audiogen
• [MusicLDM](https://musicldm.github.io/) : MusicLDM
• [Harmonai](https://www.harmonai.org/) : harmonai
• [Bark](https://github.com/suno-ai/bark) : text2speech
• [Modelscope text-to-video-synthesis](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) : txt2vid
• [AnimateLCM](https://animatelcm.github.io/) : txt2vid
• [Open AI Shap-E](https://github.com/openai/shap-e) : txt2shape, img2shape
• [compel](https://github.com/damian0815/compel) : prompt enhancement for various StableDiffusionPipeline-based modules
• [tomesd](https://github.com/dbolya/tomesd) : token merging for various StableDiffusionPipeline-based modules
• [Python](https://www.python.org/)
• [PyTorch](https://pytorch.org/)
• [Git](https://git-scm.com/)
• [ffmpeg](https://ffmpeg.org/)

... and all their dependencies


License

GNU General Public License v3.0


GitHub [@Woolverine94](https://github.com/Woolverine94)