Closed: Eboubaker closed this issue 1 year ago.
After finishing, I can just delete the container and the image and reclaim the used space.
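That cleanup can be sketched as below; the container and image names are placeholders, not the actual ones used here.

```shell
# Sketch of the cleanup described above. The container and image
# names are placeholders; substitute whatever you actually used.
CONTAINER_NAME="stable-diffusion"
IMAGE_NAME="stable-diffusion:latest"
# `|| true` keeps this safe to re-run if things are already gone.
docker rm -f "$CONTAINER_NAME" 2>/dev/null || true
docker rmi "$IMAGE_NAME" 2>/dev/null || true
docker system df 2>/dev/null || true   # shows remaining disk usage
```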
I have #491 open for this, though at the moment it's still far from being a one-liner.
@santisbon I tried to change the platform to amd64 but I'm getting the following error on docker build. Any idea why?
#8 8.333 - readline==8.1.2=h7f8727e_1 -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
#8 8.333 - ruamel_yaml==0.15.100=py39h27cfd23_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
#8 8.333 - sqlite==3.38.2=hc218d9a_0 -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
#8 8.333 - tk==8.6.11=h1ccaba5_0 -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
#8 8.333 - xz==5.2.5=h7b6447c_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
#8 8.333 - yaml==0.2.5=h7b6447c_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
#8 8.333 - zlib==1.2.12=h7f8727e_1 -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17']
#8 8.333
#8 8.333 Your installed version is: not available
#8 8.333
#8 8.333
------
executor failed running [/bin/bash -c bash anaconda.sh -b -u -p /anaconda && /anaconda/bin/conda init bash]: exit code: 1
Here are the steps I did:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O anaconda.sh && chmod +x anaconda.sh
TAG_STABLE_DIFFUSION="kencanak/stable-diffusion-amd"
PLATFORM="linux/amd64"
GITHUB_STABLE_DIFFUSION="-b development https://github.com/kencanak/InvokeAI.git stable-diffusion"
REQS_STABLE_DIFFUSION="requirements-lin-AMD.txt"
CONDA_SUBDIR="osx-64"
echo $TAG_STABLE_DIFFUSION
echo $PLATFORM
echo $GITHUB_STABLE_DIFFUSION
echo $REQS_STABLE_DIFFUSION
echo $CONDA_SUBDIR
3. Run docker build.
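For reference, the build step with those variables presumably looks something like this minimal sketch; `--platform` and `-t` are standard docker build flags, while the remaining variables (GITHUB_STABLE_DIFFUSION, REQS_STABLE_DIFFUSION, CONDA_SUBDIR) would be consumed by the build script or passed as build args, so check the Dockerfile's ARG declarations for the real names.

```shell
# Hedged sketch of step 3, reusing the values set above.
TAG_STABLE_DIFFUSION="kencanak/stable-diffusion-amd"
PLATFORM="linux/amd64"

# Compose the command as a string so it can be inspected first.
BUILD_CMD="docker build --platform $PLATFORM -t $TAG_STABLE_DIFFUSION ."
echo "$BUILD_CMD"   # inspect before running
# eval "$BUILD_CMD" # uncomment to actually build
```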
context:
- I am trying to build and run the Docker image on an Intel MacBook.
- I did manage to run the app outside of Docker using the `https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh` conda version.
Not sure what I'm missing here.
Due to #614 the version of the code now needed is from the original way of installing GFPGAN before the refactoring that took place (at least for now). That's why it uses the orig-gfpgan branch in my fork. Looks like the one you're using comes from the latest version in development.

I don't have an Intel Mac to test that specific scenario with your reqs file, but I'm working on a cloud deployment option for Linux amd64 containers that should be similar to your use case (it will at least have the same chip architecture) and might give me more info. In the meantime I'd try grabbing the code from that orig-gfpgan branch in my repo. Other things to try could be creating the environment with the conda .yaml instead of installing the reqs directly with the pip .txt file, or seeing if unsetting CONDA_SUBDIR helps since your container is on linux/amd64.
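The last two suggestions can be sketched as follows; the environment file name is an assumption, so use whatever .yaml the repo actually ships.

```shell
# Forcing CONDA_SUBDIR=osx-64 makes conda solve for macOS packages
# even inside a linux/amd64 container; clearing it lets conda fall
# back to the platform default (linux-64 here).
unset CONDA_SUBDIR
echo "CONDA_SUBDIR=${CONDA_SUBDIR:-<platform default>}"

# Alternative: create the env from the conda file instead of the
# pip reqs (file name is an assumption):
# conda env create -f environment.yaml
```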
@santisbon ah, thanks for the explanation. Will try that.
I started updating the Docker image as well as providing scripts to build and run the image, which can be found here. Currently I can only manage to get it running when ARCH is x86_64 (amd64), but not when building with ARCH=aarch64. Any ideas?
./build.sh
You are using these values:
volumename: sd_checkpoint
arch: x86_64
platform: Linux/x86_64
sd_checkpoint_link: ../models/ldm/stable-diffusion-v1/model.ckpt
sd_checkpoint: /Users/mauwii/stable-diffusion/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
conda_subdir: Linux-x86_64
github_invoke_ai: https://github.com/invoke-ai/InvokeAI.git
tag_invoke_ai: mauwii/stable-diffusion
Volume already exists
[+] Building 2511.3s (20/20) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.85kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 5.0s
=> [auth] library/ubuntu:pull token for registry-1.docker.io 0.0s
=> [get_miniconda 1/3] FROM docker.io/library/ubuntu@sha256:35fb073f9e56eb84041b0745cb714eff0f7b225ea9e024f703cab56aaa5c7720 6.2s
=> => resolve docker.io/library/ubuntu@sha256:35fb073f9e56eb84041b0745cb714eff0f7b225ea9e024f703cab56aaa5c7720 0.0s
=> => sha256:216c552ea5ba7b0e3f6e33624e129981c39996021403518019d19b8843c27cbc 1.46kB / 1.46kB 0.0s
=> => sha256:cf92e523b49ea3d1fae59f5f082437a5f96c244fda6697995920142ff31d59cf 30.43MB / 30.43MB 5.4s
=> => sha256:35fb073f9e56eb84041b0745cb714eff0f7b225ea9e024f703cab56aaa5c7720 1.42kB / 1.42kB 0.0s
=> => sha256:a8fe6fd30333dc60fc5306982a7c51385c2091af1e0ee887166b40a905691fd0 529B / 529B 0.0s
=> => extracting sha256:cf92e523b49ea3d1fae59f5f082437a5f96c244fda6697995920142ff31d59cf 0.6s
=> [internal] load build context 0.0s
=> => transferring context: 208B 0.0s
=> [invokeai 2/12] RUN echo "" > ~/.bashrc 0.3s
=> [get_miniconda 2/3] RUN apt-get update && apt-get install -y wget && apt-get clean && rm -rf /var/lib/apt/lists/* 50.4s
=> [invokeai 3/12] RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends gcc git libgl1-mesa-glx libglib2.0-0 124.0s
=> [get_miniconda 3/3] RUN wget --progress=dot:giga -O /miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-py39_4.12.0-Linux-x86_64.sh && bash /miniconda.sh - 47.1s
=> [invokeai 4/12] RUN git clone https://github.com/invoke-ai/InvokeAI.git /invokeai 17.0s
=> [invokeai 5/12] COPY --from=get_miniconda /opt/miniconda /opt/miniconda 2.2s
=> [invokeai 6/12] RUN . /opt/miniconda/etc/profile.d/conda.sh && conda update conda && conda init bash 49.0s
=> [invokeai 7/12] WORKDIR /invokeai/models/ldm/stable-diffusion-v1 0.0s
=> [invokeai 8/12] RUN ln -s /data/sd-v1-4.ckpt ./model.ckpt 2.6s
=> [invokeai 9/12] WORKDIR /invokeai 0.0s
=> [invokeai 10/12] RUN conda config --add channels conda-forge && PIP_EXISTS_ACTION="w" conda env create --name invokeai && echo "conda activate invokeai" 1749.2s
=> [invokeai 11/12] RUN ls -la /invokeai/models/ldm/stable-diffusion-v1 && python scripts/preload_models.py 519.7s
=> [invokeai 12/12] COPY entrypoint.sh / 0.0s
=> exporting to image 36.0s
=> => exporting layers 36.0s
=> => writing image sha256:685043e6a68d1d603fd190701ce2d60c768b0a4b619e3dab8246055e559de1d4 0.0s
=> => naming to docker.io/mauwii/stable-diffusion
When building with ARCH=aarch64 it fails at preloading modules:
=> ERROR [invokeai 12/13] RUN python3 scripts/preload_models.py 1.8s
------
> [invokeai 12/13] RUN python3 scripts/preload_models.py:
#18 1.633 Traceback (most recent call last):
#18 1.633 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1063, in _get_module
#18 1.633 return importlib.import_module("." + module_name, self.__name__)
#18 1.633 File "/opt/miniconda/envs/invokeai/lib/python3.9/importlib/__init__.py", line 127, in import_module
#18 1.633 return _bootstrap._gcd_import(name[level:], package, level)
#18 1.633 File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
#18 1.633 File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
#18 1.634 File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
#18 1.634 File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
#18 1.634 File "<frozen importlib._bootstrap_external>", line 850, in exec_module
#18 1.634 File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
#18 1.634 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 27, in <module>
#18 1.634 from ...modeling_utils import PreTrainedModel
#18 1.634 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/transformers/modeling_utils.py", line 78, in <module>
#18 1.634 from accelerate import __version__ as accelerate_version
#18 1.634 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/accelerate/__init__.py", line 7, in <module>
#18 1.634 from .accelerator import Accelerator
#18 1.634 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/accelerate/accelerator.py", line 33, in <module>
#18 1.634 from .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers
#18 1.634 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/accelerate/tracking.py", line 32, in <module>
#18 1.634 from torch.utils import tensorboard
#18 1.634 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/torch/utils/tensorboard/__init__.py", line 4, in <module>
#18 1.634 LooseVersion = distutils.version.LooseVersion
#18 1.634 AttributeError: module 'distutils' has no attribute 'version'
#18 1.634
#18 1.634 The above exception was the direct cause of the following exception:
#18 1.634
#18 1.634 Traceback (most recent call last):
#18 1.634 File "/invokeai/scripts/preload_models.py", line 6, in <module>
#18 1.634 from transformers import CLIPTokenizer, CLIPTextModel
#18 1.634 File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
#18 1.634 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1054, in __getattr__
#18 1.634 value = getattr(module, name)
#18 1.634 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1053, in __getattr__
#18 1.634 module = self._get_module(self._class_to_module[name])
#18 1.634 File "/opt/miniconda/envs/invokeai/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1065, in _get_module
#18 1.634 raise RuntimeError(
#18 1.634 RuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):
#18 1.634 module 'distutils' has no attribute 'version'
------
executor failed running [/bin/bash --login -c python3 scripts/preload_models.py]: exit code: 1
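This AttributeError is a known incompatibility between torch's tensorboard import and setuptools >= 60, whose distutils shim no longer preloads `distutils.version`. A commonly reported workaround is pinning setuptools below 60 inside the environment before running preload_models.py; the exact pin and placement in the Dockerfile are assumptions, not something this build has verified.

```shell
# Hedged workaround for the AttributeError above: pin setuptools
# below 60 (e.g. 59.5.0) inside the activated conda env, in a RUN
# step before preload_models.py. Printed rather than executed here
# so the command can be reviewed first.
PIN="setuptools<60"
echo "pip install '$PIN'"   # run this inside the invokeai env
```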
Got it working now to install when ARCH=aarch64; did not test amd64 with this yet.
I'm not good at Python or AI in general; I just want to try and make art with AI. The installation process is overwhelming for me. It would be nice if I could just copy one line into my terminal and start using the AI.