saharmor / dalle-playground

A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
MIT License
2.77k stars 596 forks

Cannot install flax because these package versions have conflicting dependencies. #65

Open pr1ntr opened 2 years ago

pr1ntr commented 2 years ago
ERROR: Cannot install flax because these package versions have conflicting dependencies.

The conflict is caused by:
    optax 0.1.2 depends on jaxlib>=0.1.37
    optax 0.1.1 depends on jaxlib>=0.1.37
    optax 0.1.0 depends on jaxlib>=0.1.37
    optax 0.0.91 depends on jaxlib>=0.1.37
    optax 0.0.9 depends on jaxlib>=0.1.37
    optax 0.0.8 depends on jaxlib>=0.1.37
    optax 0.0.6 depends on jaxlib>=0.1.37
    optax 0.0.5 depends on jaxlib>=0.1.37
    optax 0.0.3 depends on jaxlib>=0.1.37
    optax 0.0.2 depends on jaxlib>=0.1.37
    optax 0.0.1 depends on jaxlib>=0.1.37

Win 11, Python 3.10.5, pip 22.0.4
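A likely root cause on native Windows: at the time, jaxlib published no Windows wheels, so pip backtracks through every optax release that requires it and finally gives up. A quick, hedged way to confirm what pip can actually see on your platform (`pip index` is an experimental subcommand and needs a recent pip; both commands need network access):

```shell
# Check which jaxlib versions pip can resolve on this platform.
# On native Windows this typically finds nothing, which is why the
# resolver backtracks through every optax release in the log above.
pip index versions jaxlib

# Alternative probe: try to fetch just the wheel, without dependencies.
pip download jaxlib --no-deps -d ./jaxlib-check
```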

Phildo commented 2 years ago

Same:

ERROR: Cannot install -r requirements.txt (line 4), flax and transformers because these package versions have conflicting dependencies.

The conflict is caused by:
optax 0.1.2 depends on jaxlib>=0.1.37
optax 0.1.1 depends on jaxlib>=0.1.37
flax 0.5.0 depends on typing-extensions>=4.1.1
huggingface-hub 0.1.0 depends on typing-extensions
jax 0.3.0 depends on typing_extensions
optax 0.1.0 depends on typing-extensions~=3.10.0
flax 0.5.0 depends on typing-extensions>=4.1.1
huggingface-hub 0.1.0 depends on typing-extensions
jax 0.3.0 depends on typing_extensions
optax 0.0.91 depends on typing-extensions~=3.10.0
optax 0.0.9 depends on jaxlib>=0.1.37
optax 0.0.8 depends on jaxlib>=0.1.37
optax 0.0.6 depends on jaxlib>=0.1.37
optax 0.0.5 depends on jaxlib>=0.1.37
optax 0.0.3 depends on jaxlib>=0.1.37
optax 0.0.2 depends on jaxlib>=0.1.37
optax 0.0.1 depends on jaxlib>=0.1.37

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Win 10, Python 3.9.13, pip 22.1.2

dmwyatt commented 2 years ago

I managed to get it installed in WSL2 instead of straight on Windows.

dmwyatt commented 2 years ago

Had to get cuda toolkit, cudnn, troubleshooting issues and stuff...but it's working great in WSL2.

Using mega model version takes ~30 seconds per image with Geforce GTX 1080, 32GB ram, Ryzen 5900X.
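For anyone else taking the WSL2 route, a rough sketch of the steps described above (distro choice, package setup, and the repo's requirements file location are assumptions; follow NVIDIA's current WSL install docs for the CUDA toolkit and cuDNN specifics):

```shell
# From an elevated PowerShell prompt: install WSL2 with Ubuntu.
wsl --install -d Ubuntu

# Inside the WSL2 shell, after setting up the CUDA toolkit and cuDNN
# per NVIDIA's WSL guide: install a CUDA-enabled jaxlib from Google's
# wheel index, then the project's requirements.
pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
pip install -r requirements.txt
```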

Phildo commented 2 years ago

Had to get cuda toolkit, cudnn, troubleshooting issues and stuff...but it's working great in WSL2.

Installed WSL2, the CUDA toolkit, and cuDNN. Getting a bunch of errors booting up mega, the first of which is:

--> Starting DALL-E Server. This might take up to two minutes.
Traceback (most recent call last):

File "/home/phildo/.local/lib/python3.8/site-packages/dalle_mini/model/utils.py", line 23, in from_pretrained
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/apis/public.py", line 3885, in download
File "/usr/lib/python3.8/multiprocessing/pool.py", line 364, in map
File "/usr/lib/python3.8/multiprocessing/pool.py", line 771, in get
File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
File "/usr/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/apis/public.py", line 3979, in _download_file
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/apis/public.py", line 3355, in download
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/sdk/wandb_artifacts.py", line 912, in load_file
File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/sdk/interface/artifacts.py", line 948, in helper
File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
File "/home/phildo/.local/lib/python3.8/site-packages/wandb/util.py", line 1430, in fsync_open
OSError: [Errno 5] Input/output error

During handling of the above exception, another exception occurred:

(then a bunch more errors)

EDIT: Never mind, the problem was a full hard drive 🤦 Leaving the previous comment for others in a similar situation.

However, I still can't get it to work on WSL2. Now I'm getting:

--> Starting DALL-E Server. This might take up to two minutes.
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)

followed by this warning and then a huge list of weight names:

Some of the weights of DalleBart were initialized in float16 precision from the model checkpoint at /tmp/tmpmx4i52le:
[('lm_head', 'kernel'), ('model', 'decoder', 'embed_positions', 'embedding'), ('model', 'decoder', 'embed_tokens', 'embedding'), ('model', 'decoder', 'final_ln', 'bias'), ('model', 'decoder', 'layernorm_embedding', 'bias'), ('model', 'decoder', 'layernorm_embedding', 'scale'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_0', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_0', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_0', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_0', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_1', 'k_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_1', 'out_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_1', 'q_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'FlaxBartAttention_1', 'v_proj', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'Dense_0', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'Dense_1', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'Dense_2', 'kernel'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'LayerNorm_0', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'GLU_0', 'LayerNorm_1', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_0', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_1', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_1', 'scale'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_2', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_3', 'bias'), ('model', 'decoder', 'layers', 'FlaxBartDecoderLayers', 'LayerNorm_3', 
'scale'), ('model', 'encoder', 'embed_positions', 'embedding'), ('model', 'encoder', 'embed_tokens', 'embedding'), ('model', 'encoder', 'final_ln', 'bias'), ('model', 'encoder', 'layernorm_embedding', 'bias'), ('model', 'encoder', 'layernorm_embedding', 'scale'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'FlaxBartAttention_0', 'k_proj', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'FlaxBartAttention_0', 'out_proj', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'FlaxBartAttention_0', 'q_proj', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'FlaxBartAttention_0', 'v_proj', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'Dense_0', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'Dense_1', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'Dense_2', 'kernel'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'LayerNorm_0', 'bias'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'GLU_0', 'LayerNorm_1', 'bias'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'LayerNorm_0', 'bias'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'LayerNorm_1', 'bias'), ('model', 'encoder', 'layers', 'FlaxBartEncoderLayers', 'LayerNorm_1', 'scale')]
You should probably UPCAST the model weights to float32 if this was not intended. See [`~FlaxPreTrainedModel.to_fp32`] for further information on how to do this.
/home/phildo/jax/jax/_src/ops/scatter.py:87: FutureWarning: scatter inputs have incompatible types: cannot safely cast value from dtype=float16 to dtype=float32. In future JAX releases this will result in an error.
  warnings.warn("scatter inputs have incompatible types: cannot safely cast "

Then it appears to hang, with no further output.

I'm able to load up the front end, but get the error "Error querying DALL-E service. Check your backend server logs." when trying to prompt anything.

I even followed this PR's instructions for installing it on WSL2 ( https://github.com/saharmor/dalle-playground/pull/44/commits/0e6cfb2d2e5680b01e700b18bbf538c94cc94f1c ), including building jax from source (which, warning: takes like an hour!).

Any advice, @dmwyatt? I'm on Win 10 with a GTX 1080 Ti (with a fresh default installation of WSL2).
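The "No GPU/TPU found" warning above means JAX is running on a CPU-only jaxlib build, so checking which devices JAX can see is a reasonable first step. A minimal probe (the helper function is hypothetical, not part of the project; it only imports jax if it is actually installed):

```python
import importlib.util

def jax_device_summary():
    """Return a short description of the JAX devices visible here."""
    if importlib.util.find_spec("jax") is None:
        return "jax is not installed in this environment"
    import jax
    # CPU-only devices here mean the CUDA-enabled jaxlib build is
    # missing or not being picked up, matching the warning above.
    return ", ".join(str(d) for d in jax.devices())

print(jax_device_summary())
```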

SantinoPetrovic commented 2 years ago

Having the same issue:


ERROR: Cannot install -r requirements.txt (line 4), flax and transformers because these package versions have conflicting dependencies.
The conflict is caused by:
    optax 0.1.2 depends on jaxlib>=0.1.37
    optax 0.1.1 depends on jaxlib>=0.1.37
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.1.0 depends on typing-extensions~=3.10.0
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.0.91 depends on typing-extensions~=3.10.0
    optax 0.0.9 depends on jaxlib>=0.1.37
    optax 0.0.8 depends on jaxlib>=0.1.37
    optax 0.0.6 depends on jaxlib>=0.1.37
    optax 0.0.5 depends on jaxlib>=0.1.37
    optax 0.0.3 depends on jaxlib>=0.1.37
    optax 0.0.2 depends on jaxlib>=0.1.37
    optax 0.0.1 depends on jaxlib>=0.1.37

Talkashie commented 2 years ago

Is there any solution to this? I'm also getting stuck here.

ERROR: Cannot install -r requirements.txt (line 4), flax and transformers because these package versions have conflicting dependencies.

The conflict is caused by:
    optax 0.1.2 depends on jaxlib>=0.1.37
    optax 0.1.1 depends on jaxlib>=0.1.37
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.1.0 depends on typing-extensions~=3.10.0
    flax 0.5.0 depends on typing-extensions>=4.1.1
    huggingface-hub 0.1.0 depends on typing-extensions
    jax 0.3.0 depends on typing_extensions
    optax 0.0.91 depends on typing-extensions~=3.10.0
    optax 0.0.9 depends on jaxlib>=0.1.37
    optax 0.0.8 depends on jaxlib>=0.1.37
    optax 0.0.6 depends on jaxlib>=0.1.37
    optax 0.0.5 depends on jaxlib>=0.1.37
    optax 0.0.3 depends on jaxlib>=0.1.37
    optax 0.0.2 depends on jaxlib>=0.1.37
    optax 0.0.1 depends on jaxlib>=0.1.37

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

EDIT: I seem to have fixed most of the issues by installing dependencies separately and creating a new environment.
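For reference, the "new environment, dependencies installed separately" approach might look like this (a sketch, not the commenter's exact commands; the jax wheel index URL matches the one used later in this thread):

```shell
# Create and activate a fresh virtual environment.
python -m venv dalle-env
source dalle-env/bin/activate        # on Windows: dalle-env\Scripts\activate
pip install --upgrade pip

# Install the heavyweight packages one at a time so the resolver works
# with a smaller constraint set at each step.
pip install "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
pip install flax transformers
pip install -r requirements.txt
```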

karyeet commented 2 years ago

I recommend just installing under WSL2 if you wish to use Windows. The install will complete without issue.

gadams999 commented 2 years ago

@karyeet I've got it working in WSL2 too, but realized I don't have access to all GPU RAM. Do you know what the system memory and GPU memory requirements are for Mega_full?

karyeet commented 2 years ago

@gadams999

Had to get cuda toolkit, cudnn, troubleshooting issues and stuff...but it's working great in WSL2.

Using mega model version takes ~30 seconds per image with Geforce GTX 1080, 32GB ram, Ryzen 5900X.

As dmwyatt points out, make sure you have the CUDA toolkit and cuDNN installed. Also make sure you have jax[cuda] installed.

I could only run mini on a 3060 6GB; this post suggests you need at least 16GB of VRAM for mega_full.

Likely 8GB is needed for mega.

sameermahajan commented 1 year ago

Any resolution for this? I cannot install tensorflowjs on my Windows machine due to this.

mnai01 commented 1 year ago

Any resolution for this? I cannot install tensorflowjs on my Windows machine due to this.

Same exact problem when running pip install tensorflowjs

reidpat commented 1 year ago

I am sadly also having this error when trying to install tensorflowjs on my Windows machine.

saharmor commented 1 year ago

@reidpat have you tried the solutions mentioned in this thread? If so, please paste the error you are getting

yhann0827 commented 9 months ago

I'm also having the same error. Has anyone found a way to solve this?

AndreAlbu commented 7 months ago

pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html