immich-app / immich

High performance self-hosted photo and video management solution.
https://immich.app
GNU Affero General Public License v3.0

[BUG] Unable to run clip encoding pipeline: Repository not found #2404

Closed · ShinasShaji closed this 1 year ago

ShinasShaji commented 1 year ago

The bug

After doing a clean install of Immich v1.55.0 using docker-compose, I noticed that CLIP encoding was repeatedly failing for each image upload in the logs. I have attached the logs below:

immich_machine_learning  | INFO:     172.18.0.8:58006 - "POST /sentence-transformer/encode-image HTTP/1.1" 500 Internal Server Error
immich_machine_learning  | ERROR:    Exception in ASGI application
immich_machine_learning  | Traceback (most recent call last):
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
immich_machine_learning  |     response.raise_for_status()
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
immich_machine_learning  |     raise HTTPError(http_error_msg, response=self)
immich_machine_learning  | requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/sentence-transformers/clip-ViT-B-32
immich_machine_learning  | 
immich_machine_learning  | The above exception was the direct cause of the following exception:
immich_machine_learning  | 
immich_machine_learning  | Traceback (most recent call last):
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 435, in run_asgi
immich_machine_learning  |     result = await app(  # type: ignore[func-returns-value]
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
immich_machine_learning  |     return await self.app(scope, receive, send)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
immich_machine_learning  |     await super().__call__(scope, receive, send)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
immich_machine_learning  |     await self.middleware_stack(scope, receive, send)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
immich_machine_learning  |     raise exc
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
immich_machine_learning  |     await self.app(scope, receive, _send)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
immich_machine_learning  |     raise exc
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
immich_machine_learning  |     await self.app(scope, receive, sender)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
immich_machine_learning  |     raise e
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
immich_machine_learning  |     await self.app(scope, receive, send)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
immich_machine_learning  |     await route.handle(scope, receive, send)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
immich_machine_learning  |     await self.app(scope, receive, send)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
immich_machine_learning  |     response = await func(request)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/fastapi/routing.py", line 237, in app
immich_machine_learning  |     raw_response = await run_endpoint_function(
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/fastapi/routing.py", line 165, in run_endpoint_function
immich_machine_learning  |     return await run_in_threadpool(dependant.call, **values)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
immich_machine_learning  |     return await anyio.to_thread.run_sync(func, *args)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
immich_machine_learning  |     return await get_asynclib().run_sync_in_worker_thread(
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
immich_machine_learning  |     return await future
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
immich_machine_learning  |     result = context.run(func, *args)
immich_machine_learning  |   File "/usr/src/app/src/main.py", line 64, in clip_encode_image
immich_machine_learning  |     model = _get_model(clip_image_model)
immich_machine_learning  |   File "/usr/src/app/src/main.py", line 98, in _get_model
immich_machine_learning  |     _model_cache[key] = SentenceTransformer(model)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 87, in __init__
immich_machine_learning  |     snapshot_download(model_name_or_path,
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/sentence_transformers/util.py", line 442, in snapshot_download
immich_machine_learning  |     model_info = _api.model_info(repo_id=repo_id, revision=revision, token=token)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
immich_machine_learning  |     return fn(*args, **kwargs)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1604, in model_info
immich_machine_learning  |     hf_raise_for_status(r)
immich_machine_learning  |   File "/opt/venv/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 291, in hf_raise_for_status
immich_machine_learning  |     raise RepositoryNotFoundError(message, response) from e
immich_machine_learning  | huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6459fc33-5d366a945616ed7679841ed8)
immich_machine_learning  | 
immich_machine_learning  | Repository Not Found for url: https://huggingface.co/api/models/sentence-transformers/clip-ViT-B-32.
immich_machine_learning  | Please make sure you specified the correct `repo_id` and `repo_type`.
immich_machine_learning  | If you are trying to access a private or gated repo, make sure you are authenticated.
immich_machine_learning  | Invalid username or password.
immich_microservices     | [Nest] 1  - 05/09/2023, 7:54:25 AM   ERROR [SmartInfoService] Unable run clip encoding pipeline: e5a16c5a-6d19-45a9-a69f-8e15d07f86c4
immich_microservices     | Error: Request failed with status code 500
immich_microservices     |     at createError (/usr/src/app/node_modules/axios/lib/core/createError.js:16:15)
immich_microservices     |     at settle (/usr/src/app/node_modules/axios/lib/core/settle.js:17:12)
immich_microservices     |     at IncomingMessage.handleStreamEnd (/usr/src/app/node_modules/axios/lib/adapters/http.js:322:11)
immich_microservices     |     at IncomingMessage.emit (node:events:539:35)
immich_microservices     |     at endReadableNT (node:internal/streams/readable:1345:12)
immich_microservices     |     at processTicksAndRejections (node:internal/process/task_queues:83:21)
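The `_get_model` frames in the traceback suggest the ML service keeps a lazy model cache: the model is downloaded and cached on first use, so a failed download is retried (and fails again) on every request. This is an illustrative sketch of that pattern, not Immich's actual code:

```python
from typing import Callable, Dict

# Illustrative sketch of the lazy model cache suggested by the
# `_get_model` frames above -- not Immich's actual implementation.
_model_cache: Dict[str, object] = {}

def get_model(name: str, loader: Callable[[str], object]) -> object:
    """Return a cached model, loading (and caching) it on first use."""
    if name not in _model_cache:
        # If loader() raises (network error, missing Hub repo), nothing
        # is cached, so every subsequent request retries the download --
        # which is why each upload hits the same 500 until the Hub
        # repository is reachable again.
        _model_cache[name] = loader(name)
    return _model_cache[name]
```

This explains why the error repeats per upload rather than once at startup: a load failure leaves the cache empty, and the next request attempts the download again.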

I'm not very well-versed in this, but opening the API link in a browser does prompt me for a username and password.

I have made only a few modifications to the .env file.

Is this the CLIP model that Immich uses? openai/clip-vit-base-patch32
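To double-check whether the failure is on Hugging Face's side, you can request the same URL shown in the traceback and interpret the status code. A minimal sketch (the helper and its categories are my own, not part of Immich or huggingface_hub):

```python
import urllib.request

# The endpoint the traceback shows the ML service hitting.
HUB_MODEL_API = "https://huggingface.co/api/models/sentence-transformers/clip-ViT-B-32"

def classify_hub_status(status_code: int) -> str:
    """Rough interpretation of the Hub model-info endpoint's status code."""
    if status_code == 200:
        return "repo reachable"
    if status_code in (401, 403, 404):
        # The Hub answers 401 for missing *and* private repos alike,
        # which is why this outage surfaces as RepositoryNotFoundError.
        return "repo missing, private, or auth required"
    if 500 <= status_code < 600:
        return "Hub server error"
    return "unexpected status"

# Uncomment to probe the live endpoint:
# with urllib.request.urlopen(HUB_MODEL_API, timeout=10) as resp:
#     print(classify_hub_status(resp.status))
```

A 401 here with no credentials configured points at the repository itself being unavailable rather than at anything in the local Immich setup.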

The OS that Immich Server is running on

Windows 11 Pro 22H2 (with Hyper-V)

Version of Immich Server

v1.55.0

Version of Immich Mobile App

Irrelevant?

Platform with the issue

Your docker-compose.yml content

version: "3.8"

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:release
    entrypoint: ["/bin/sh", "./start-server.sh"]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
    env_file:
      - .env
    depends_on:
      - redis
      - database
      - typesense
    restart: always

  immich-microservices:
    container_name: immich_microservices
    image: ghcr.io/immich-app/immich-server:release
    entrypoint: ["/bin/sh", "./start-microservices.sh"]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
    env_file:
      - .env
    depends_on:
      - redis
      - database
      - typesense
    restart: always

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:release
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - model-cache:/cache
    env_file:
      - .env
    restart: always

  immich-web:
    container_name: immich_web
    image: ghcr.io/immich-app/immich-web:release
    entrypoint: ["/bin/sh", "./entrypoint.sh"]
    env_file:
      - .env
    restart: always

  typesense:
    container_name: immich_typesense
    image: typesense/typesense:0.24.0
    environment:
      - TYPESENSE_API_KEY=${TYPESENSE_API_KEY}
      - TYPESENSE_DATA_DIR=/data
    logging:
      driver: none
    volumes:
      - tsdata:/data
    restart: always

  redis:
    container_name: immich_redis
    image: redis:6.2
    restart: always

  database:
    container_name: immich_postgres
    image: postgres:14
    env_file:
      - .env
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      PG_DATA: /var/lib/postgresql/data
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always

  immich-proxy:
    container_name: immich_proxy
    image: ghcr.io/immich-app/immich-proxy:release
    environment:
      # Make sure these values get passed through from the env file
      - IMMICH_SERVER_URL
      - IMMICH_WEB_URL
    ports:
      - 2283:8080
    logging:
      driver: none
    depends_on:
      - immich-server
    restart: always

volumes:
  pgdata:
  model-cache:
  tsdata:

Your .env content

###################################################################################
# Database
###################################################################################

DB_HOSTNAME=immich_postgres
DB_USERNAME=databaseuser
DB_PASSWORD=databasepass
DB_DATABASE_NAME=immich

# Optional Database settings:
# DB_PORT=5432

###################################################################################
# Redis
###################################################################################

REDIS_HOSTNAME=immich_redis

# REDIS_URL will be used to pass custom options to ioredis.
# Example for Sentinel
# {"sentinels":[{"host":"redis-sentinel-node-0","port":26379},{"host":"redis-sentinel-node-1","port":26379},{"host":"redis-sentinel-node-2","port":26379}],"name":"redis-sentinel"}
# REDIS_URL=ioredis://eyJzZW50aW5lbHMiOlt7Imhvc3QiOiJyZWRpcy1zZW50aW5lbDEiLCJwb3J0IjoyNjM3OX0seyJob3N0IjoicmVkaXMtc2VudGluZWwyIiwicG9ydCI6MjYzNzl9XSwibmFtZSI6Im15bWFzdGVyIn0=

# Optional Redis settings:

# Note: these parameters are not automatically passed to the Redis Container
# to do so, please edit the docker-compose.yml file as well. Redis is not configured
# via environment variables, only redis.conf or the command line

# REDIS_PORT=6379
# REDIS_DBINDEX=0
# REDIS_USERNAME=
# REDIS_PASSWORD=
# REDIS_SOCKET=

###################################################################################
# Upload File Location
#
# This is the location where uploaded files are stored.
###################################################################################

UPLOAD_LOCATION="D:\Users\shina\Pictures\immich\immich-storage"

###################################################################################
# Typesense
###################################################################################
TYPESENSE_API_KEY=some-random-text-indeeed
# TYPESENSE_ENABLED=false
# TYPESENSE_URL uses base64 encoding for the nodes json.
# Example JSON that was used:
# [
#      { 'host': 'typesense-1.example.net', 'port': '443', 'protocol': 'https' },
#      { 'host': 'typesense-2.example.net', 'port': '443', 'protocol': 'https' },
#      { 'host': 'typesense-3.example.net', 'port': '443', 'protocol': 'https' },
#  ]
# TYPESENSE_URL=ha://WwogICAgeyAnaG9zdCc6ICd0eXBlc2Vuc2UtMS5leGFtcGxlLm5ldCcsICdwb3J0JzogJzQ0MycsICdwcm90b2NvbCc6ICdodHRwcycgfSwKICAgIHsgJ2hvc3QnOiAndHlwZXNlbnNlLTIuZXhhbXBsZS5uZXQnLCAncG9ydCc6ICc0NDMnLCAncHJvdG9jb2wnOiAnaHR0cHMnIH0sCiAgICB7ICdob3N0JzogJ3R5cGVzZW5zZS0zLmV4YW1wbGUubmV0JywgJ3BvcnQnOiAnNDQzJywgJ3Byb3RvY29sJzogJ2h0dHBzJyB9LApd

###################################################################################
# Reverse Geocoding
#
# Reverse geocoding is done locally which has a small impact on memory usage
# This memory usage can be altered by changing the REVERSE_GEOCODING_PRECISION variable
# This ranges from 0-3 with 3 being the most precise
# 3 - Cities > 500 population: ~200MB RAM
# 2 - Cities > 1000 population: ~150MB RAM
# 1 - Cities > 5000 population: ~80MB RAM
# 0 - Cities > 15000 population: ~40MB RAM
####################################################################################

# DISABLE_REVERSE_GEOCODING=false
# REVERSE_GEOCODING_PRECISION=3

####################################################################################
# WEB - Optional
#
# Custom message on the login page, should be written in HTML form.
# For example:
# PUBLIC_LOGIN_PAGE_MESSAGE="This is a demo instance of Immich.<br><br>Email: <i>demo@demo.de</i><br>Password: <i>demo</i>"
####################################################################################

PUBLIC_LOGIN_PAGE_MESSAGE=

####################################################################################
# Alternative Service Addresses - Optional
#
# This is an advanced feature for users who may be running their immich services on different hosts.
# It will not change which address or port that services bind to within their containers, but it will change where other services look for their peers.
# Note: immich-microservices is bound to 3002, but no references are made
####################################################################################

IMMICH_WEB_URL=http://immich-web:3000
IMMICH_SERVER_URL=http://immich-server:3001
IMMICH_MACHINE_LEARNING_URL=http://immich-machine-learning:3003

####################################################################################
# Alternative API's External Address - Optional
#
# This is an advanced feature used to control the public server endpoint returned to clients during Well-known discovery.
# You should only use this if you want mobile apps to access the immich API over a custom URL. Do not include trailing slash.
# NOTE: At this time, the web app will not be affected by this setting and will continue to use the relative path: /api
# Examples: http://localhost:3001, http://immich-api.example.com, etc
####################################################################################

#IMMICH_API_URL_EXTERNAL=http://localhost:3001
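The commented REDIS_URL and TYPESENSE_URL examples above both embed base64-encoded JSON behind a scheme prefix (`ioredis://…`, `ha://…`). If you ever need to generate one of these values yourself, a small sketch (hosts are illustrative):

```python
import base64
import json

def encode_env_url(scheme: str, payload: object) -> str:
    """Base64-encode a JSON payload behind a scheme prefix, matching the
    shape of the commented REDIS_URL / TYPESENSE_URL examples above."""
    raw = json.dumps(payload).encode("utf-8")
    return scheme + "://" + base64.b64encode(raw).decode("ascii")

# Example: a Redis Sentinel config like the one in the .env comments.
sentinel_cfg = {
    "sentinels": [
        {"host": "redis-sentinel-node-0", "port": 26379},
        {"host": "redis-sentinel-node-1", "port": 26379},
    ],
    "name": "redis-sentinel",
}
redis_url = encode_env_url("ioredis", sentinel_cfg)
```

Decoding the part after `://` with any base64 tool recovers the original JSON, which is a handy way to sanity-check a value before putting it in the .env file.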

Reproduction steps

  1. Start with a clean install of Immich using docker-compose.
  2. Upload a photo - initially found this error here.
  3. Encode CLIP in administration/jobs - this again triggers this error.
  4. Search using Smart search - this again triggers this error, showing an Internal Server Error (500) in the web UI and the above error in the logs.

Additional information

Really happy with Immich so far, it's amazing! The global map is sweet, and I'm looking forward to face recognition!

martabal commented 1 year ago

There's an ongoing issue with Hugging Face's Sentence Transformers repositories. The default model for Immich is clip-ViT-B-32.

Edit: the Hugging Face devs are working on it: https://twitter.com/huggingface/status/1655760648926642178

ShinasShaji commented 1 year ago

Awesome, I shall close this issue for now and await an update from Hugging Face.

ShinasShaji commented 1 year ago

On second thought, I'll keep this issue open until a fix lands, in case other Immich users are hitting the same issue and wondering what's going on.

ShinasShaji commented 1 year ago

The models in Hugging Face's Sentence Transformers organization are back online, and the model Immich uses is accessible again. CLIP encoding is working. Closing the issue.