VinAIResearch / Anti-DreamBooth

Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV 2023)
https://vinairesearch.github.io/Anti-DreamBooth/
GNU Affero General Public License v3.0

could not find version of module #5

Closed alidcast closed 1 year ago

alidcast commented 1 year ago

I know there are difficulties installing Python dependencies, so apologies if this is not relevant to the project setup.

I've followed the steps, activating an environment with conda and running pip install, but several modules can't be found, such as triton and pytorch. For example:

ERROR: Could not find a version that satisfies the requirement triton==2.0.0 (from versions: none)
ERROR: No matching distribution found for triton==2.0.0

I don't have enough experience with Python to know if this is specific to my environment. It's pointing to the correct versions, python 3.9.0 and pip 23.0.1, both specific to the conda environment.

Please let me know if there's anything else that can be done. ChatGPT suggested I could try installing deps from other sources, but it seems weird that the sources you provided wouldn't work.

Luvata commented 1 year ago

Sorry, the correct package version we used is 2.0.0.dev20221031. You can install it by running the following command:

pip install triton==2.0.0.dev20221031

Please let us know if you have any further questions or issues.

alidcast commented 1 year ago

I'm still getting the same issue on install, but it may be something off with my setup. I'm looking into it.

What machines has this been tested on? I'm on macOS.

thuanz123 commented 1 year ago

Hi, we think that triton is quite new, so maybe the newest version is not supported on macOS. You can try the newest triton available on macOS with the following command:

pip install triton

alidcast commented 1 year ago

FYI, using conda install seems to be more reliable:

conda install --file requirements.txt

adding the conda-forge channel for deps:

conda config --add channels conda-forge

Renaming torch to pytorch in requirements.txt (so the dep can be found in conda).

Unfortunately there was no conda package for the latest triton. Commenting that one out, though, shows that all the others install with the above procedure.
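
Roughly, the whole sequence looks like the below (just a sketch of the steps above; the sed line is only one way to do the torch -> pytorch rename and assumes a pinned entry like torch==<version>, with triton already commented out of requirements.txt by hand):

conda config --add channels conda-forge
# rename the pip package name to the conda one (assumes a line like torch==<version>)
sed -i.bak 's/^torch==/pytorch==/' requirements.txt
conda install --file requirements.txt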

thuanz123 commented 1 year ago

Yeah, we have just found out that triton only supports Linux, so you can remove triton from the requirements and see if our code still works. We would love to hear your feedback, since we don't have any macOS machine to test on.
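
For example, one way to drop it is to filter requirements.txt into a copy (a sketch; requirements-macos.txt is just an illustrative name, and the grep pattern assumes the triton entry starts its line):

grep -v '^triton' requirements.txt > requirements-macos.txt
pip install -r requirements-macos.txt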

alidcast commented 1 year ago

Ok, I'll try it out.

I don't see triton used in the source code (besides a vocab file); was it included as a dependency of something else?

alidcast commented 1 year ago

So the pip install variation still causes errors on macOS:

Collecting diffusers==0.13.1
  Using cached diffusers-0.13.1-py3-none-any.whl (716 kB)
Collecting xformers==0.0.16
  Using cached xformers-0.0.16.tar.gz (7.3 MB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/private/var/folders/fs/c1pd5zg92839fbv4yxz_86zh0000gn/T/pip-install-fqys13dt/xformers_cefc12b49d5a46da9757bc95e4487c69/setup.py", line 23, in <module>
          import torch
      ModuleNotFoundError: No module named 'torch'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

I was able to install the deps with conda (adding conda-forge and pytorch as channels).

But then when running inference I get an error that CUDA is not set up. I haven't been able to find an appropriate CUDA install, as the ones on the Anaconda page end up not being available.
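
A quick sanity check confirms PyTorch can't see a CUDA device here:

python -c "import torch; print(torch.cuda.is_available())"
# prints False on this machine; CUDA builds of PyTorch aren't shipped for macOS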

Python deps can be quite frustrating!

Luvata commented 1 year ago

I'm sorry for not asking thoroughly about your system before providing the previous response. Our experiments were conducted on Linux with an Nvidia A100. We used xformers' efficient attention for faster training and a lower memory footprint on Nvidia GPUs, and triton is a dependency of xformers.

One thing you can try is running train_dreambooth_alone.sh without xformers by removing the enable_xformers_memory_efficient_attention option on line 8; let us know if it still works.
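
If it helps, one way to do that without editing the script by hand (a sketch; the sed command deletes any line in the script that mentions that option and keeps a .bak backup):

sed -i.bak '/enable_xformers_memory_efficient_attention/d' train_dreambooth_alone.sh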

For comparison, it took around 10 minutes to run train_dreambooth_alone.sh on an A100 with efficient attention enabled. However, given that finetuning DreamBooth (or training Anti-DreamBooth) is computationally intensive and requires significant computing resources, it may take much longer or even be infeasible on less powerful devices. So we highly recommend running our code on a similar system, such as Linux with an Nvidia A6000 or A100, for optimal performance.

alidcast commented 1 year ago

Makes sense; thanks. I'll go ahead and close this issue and run it on a proper system.