advimman / lama

🦙 LaMa Image Inpainting, Resolution-robust Large Mask Inpainting with Fourier Convolutions, WACV 2022
https://advimman.github.io/lama-project/
Apache License 2.0

we do it on colab but torch can't install correctly #243

Open Z0Victor opened 1 year ago

Z0Victor commented 1 year ago

Hi advimman, we ran it on Colab, but torch can't be installed correctly:

```
ERROR: Could not find a version that satisfies the requirement torch==1.8.0 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1)
ERROR: No matching distribution found for torch==1.8.0
```

How can we solve this problem?
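For context: the pin fails because `torch==1.8.0` never shipped wheels for the Python version current Colab runtimes use, so pip only offers 1.11.0 and later. A minimal sketch of choosing a pin by interpreter version (the 2.0.1 fallback is an assumption, picked from the versions pip reported as available):

```python
# Sketch: pick a torch pin based on the running interpreter.
# torch 1.8.0 only shipped wheels up to Python 3.9, which is why pip
# on a newer Colab runtime cannot find it. The 2.0.1 fallback is an
# assumption, not an official recommendation.
import sys

def torch_pin(version_info=sys.version_info):
    # version_info tuples compare element-wise, so (3, 9, 7) < (3, 10)
    if version_info < (3, 10):
        return "torch==1.8.0"
    return "torch==2.0.1"

print(torch_pin((3, 8)))   # torch==1.8.0
print(torch_pin((3, 10)))  # torch==2.0.1
```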

Xinshuai-Lyu commented 1 year ago

Change to this:

```python
#@title Run this cell to set everything up

print('\n> Cloning the repo')
!git clone https://github.com/advimman/lama.git

print('\n> Install dependencies')
!pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 torchtext==0.15.2
!pip install -r lama/requirements.txt --quiet
!pip install wget --quiet
!pip install torch==2.0.1+rocm5.4.2 torchvision==0.15.2+rocm5.4.2 torchaudio==2.0.2 -f https://download.pytorch.org/whl/torch_stable.html --quiet

print('\n> Changing the dir to:')
%cd /content/lama

print('\n> Download the model')
!curl -L $(yadisk-direct https://disk.yandex.ru/d/ouP6l8VJ0HpMZg) -o big-lama.zip
!unzip big-lama.zip

print('> Fixing opencv')
!pip uninstall opencv-python-headless -y --quiet
!pip install opencv-python-headless==4.7.0.72 --quiet

print('\n> Init mask-drawing code')
import base64, os
from IPython.display import HTML, Image
from google.colab.output import eval_js
from base64 import b64decode
import matplotlib.pyplot as plt
import numpy as np
import wget
from shutil import copyfile
import shutil

canvas_html = """
"""

def draw(imgm, filename='drawing.png', w=400, h=200, line_width=1):
    display(HTML(canvas_html % (w, h, w, h, filename.split('.')[-1], imgm, line_width)))
    data = eval_js("data")
    binary = b64decode(data.split(',')[1])
    with open(filename, 'wb') as f:
        f.write(binary)
```
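For anyone reading this outside Colab: the core of `draw()` after the JavaScript returns is just decoding a base64 data URL and writing the bytes to disk. That part can be exercised standalone (the payload below is a stand-in, not real canvas output):

```python
# What draw() does with the string returned by eval_js("data"): split
# off the "data:image/png;base64," header and write the decoded bytes.
# The payload here is a stand-in, not real canvas output.
from base64 import b64decode, b64encode

payload = b64encode(b"fake png bytes").decode()
data_url = "data:image/png;base64," + payload

binary = b64decode(data_url.split(',')[1])
with open("drawing.png", "wb") as f:
    f.write(binary)

print(open("drawing.png", "rb").read())  # b'fake png bytes'
```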

Z0Victor commented 1 year ago


Thanks, Lyu! But another problem showed up:

```
error: subprocess-exited-with-error

× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed
```

The wheel failed to build. What could be the reason?

Xinshuai-Lyu commented 1 year ago

Copy_of_LaMa_inpainting.ipynb.zip

  1. Download my Colab notebook (attached above).
  2. Edit `/content/lama/saicinpainting/training/modules/fake_fakes.py`: change `from kornia import SamplePadding` to `from kornia.constants import SamplePadding`.
  3. Edit `/content/lama/bin/predict.py`: change `device = torch.device(predict_config.device)` (line 43) to `device = torch.device("cpu")`.
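Steps 2 and 3 can also be applied programmatically instead of editing the files by hand. A minimal sketch, shown on a stand-in file so the snippet is self-contained (on Colab you would point `patch_file` at the real paths under `/content/lama`):

```python
# Sketch: apply an in-place text substitution, as in steps 2 and 3 above.
# The demo file stands in for fake_fakes.py; the real paths are an
# assumption that the repo was cloned to /content/lama as in the setup cell.
from pathlib import Path

def patch_file(path, old, new):
    """Replace the first occurrence of `old` with `new` in the file at `path`."""
    text = Path(path).read_text()
    Path(path).write_text(text.replace(old, new, 1))

demo = Path("fake_fakes_demo.py")
demo.write_text("from kornia import SamplePadding\n")
patch_file(demo, "from kornia import SamplePadding",
           "from kornia.constants import SamplePadding")
print(demo.read_text())  # from kornia.constants import SamplePadding
```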