immich-app / immich

High performance self-hosted photo and video management solution.
https://immich.app
GNU Affero General Public License v3.0

[BUG] Facial recognition issues #6414

Closed: LIHAQ closed this issue 10 months ago

LIHAQ commented 10 months ago

The bug

Installed the latest version, 1.92.1, but facial recognition and smart search are not working properly.

[01/15/24 12:37:10] INFO Starting gunicorn 21.2.0
[01/15/24 12:37:10] INFO Listening at: http://0.0.0.0:3003 (8)
[01/15/24 12:37:10] INFO Using worker: app.config.CustomUvicornWorker
[01/15/24 12:37:10] INFO Booting worker with pid: 10
[01/15/24 12:37:24] INFO Created in-memory cache with unloading after 300s of inactivity.
[01/15/24 12:37:24] INFO Initialized request thread pool with 2 threads.
[01/16/24 01:07:37] INFO Downloading facial recognition model 'buffalo_l'. This may take a while.
[01/16/24 01:09:47] WARNING Failed to load facial-recognition model 'buffalo_l'. Clearing cache and retrying.
[01/16/24 01:09:47] INFO Downloading clip model 'ViT-B-32__openai'. This may take a while.
[01/16/24 01:09:47] WARNING Attempted to clear cache for model 'buffalo_l', but cache directory does not exist
[01/16/24 01:11:58] INFO Downloading clip model 'ViT-B-32__openai'. This may take a while.
[01/16/24 01:11:58] WARNING Failed to load clip model 'ViT-B-32__openai'. Clearing cache and retrying.
[01/16/24 01:11:58] WARNING Attempted to clear cache for model 'ViT-B-32__openai', but cache directory does not exist

The OS that Immich Server is running on

Ubuntu 20.04.1 LTS

Version of Immich Server

1.92.1

Version of Immich Mobile App

1.92.1

Platform with the issue

Your docker-compose.yml content

version: "3.8"

#
# WARNING: Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.
#

name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    command: [ "start.sh", "immich" ]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - 2283:3001
    depends_on:
      - redis
      - database
    restart: always

  immich-microservices:
    container_name: immich_microservices
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    extends:
      file: hwaccel.yml
      service: hwaccel
    command: [ "start.sh", "microservices" ]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    depends_on:
      - redis
      - database
    restart: always

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always

  redis:
    container_name: immich_redis
    image: redis:6.2-alpine@sha256:c5a607fb6e1bb15d32bbcf14db22787d19e428d59e31a5da67511b49bb0f1ccc
    restart: always

  database:
    container_name: immich_postgres
    image: tensorchord/pgvecto-rs:pg14-v0.1.11@sha256:0335a1a22f8c5dd1b697f14f079934f5152eaaa216c09b61e293be285491f8ee
    env_file:
      - .env
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always

volumes:
  pgdata:
  model-cache:

Your .env content

# You can find documentation for all the supported env variables at https://immich.app/docs/install/environment-variables

# The location where your uploaded files are stored
UPLOAD_LOCATION=./library

# The Immich version to use. You can pin this to a specific version like "v1.71.0"
IMMICH_VERSION=release

# Connection secret for postgres. You should change it to a random password
DB_PASSWORD=postgres

# The values below this line do not need to be changed
###################################################################################
DB_HOSTNAME=immich_postgres
DB_USERNAME=postgres
DB_DATABASE_NAME=immich

REDIS_HOSTNAME=immich_redis

Reproduction steps

1.
2.
3.
...

Additional information

No response

mertalev commented 10 months ago

It's most likely failing to download the model; either it can't connect to the server or the connection is being refused. Models are hosted on Hugging Face, so if the server is based in China then it might explain the error.

If you have a way to connect to Hugging Face then you can download the model files here and add them to the model cache yourself. The ML service will use the files you added as long as they're in the right file structure. In particular, you need these two paths to have the model files:

  • /cache/facial-recognition/buffalo_l/detection/model.onnx
  • /cache/facial-recognition/buffalo_l/recognition/model.onnx
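As a quick way to confirm the files landed where the service expects them, here is a small sketch; the helper name is hypothetical, and the /cache root matches the default model-cache volume mount in the compose file above:

```python
from pathlib import Path

# Paths the ML service expects for the 'buffalo_l' facial recognition model,
# relative to the cache root (the model-cache volume mounted at /cache).
REQUIRED_FACE_FILES = [
    "facial-recognition/buffalo_l/detection/model.onnx",
    "facial-recognition/buffalo_l/recognition/model.onnx",
]

def missing_face_files(cache_root="/cache"):
    """Return the required buffalo_l files that are absent from the cache."""
    root = Path(cache_root)
    return [p for p in REQUIRED_FACE_FILES if not (root / p).is_file()]
```

Run it inside the immich-machine-learning container (e.g. via docker exec); an empty list means the layout is correct.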

LIHAQ commented 10 months ago

> It's most likely failing to download the model; either it can't connect to the server or the connection is being refused. Models are hosted on Hugging Face, so if the server is based in China then it might explain the error.
>
> If you have a way to connect to Hugging Face then you can download the model files here and add them to the model cache yourself. The ML service will use the files you added as long as they're in the right file structure. In particular, you need these two paths to have the model files:
>
> • /cache/facial-recognition/buffalo_l/detection/model.onnx
> • /cache/facial-recognition/buffalo_l/recognition/model.onnx
Thank you for your help. I'll go test it.

LIHAQ commented 10 months ago

> It's most likely failing to download the model; either it can't connect to the server or the connection is being refused. Models are hosted on Hugging Face, so if the server is based in China then it might explain the error.
>
> If you have a way to connect to Hugging Face then you can download the model files here and add them to the model cache yourself. The ML service will use the files you added as long as they're in the right file structure. In particular, you need these two paths to have the model files:
>
> • /cache/facial-recognition/buffalo_l/detection/model.onnx
> • /cache/facial-recognition/buffalo_l/recognition/model.onnx

@mertalev I tested it. I can manually download buffalo_l and use it in that path, but ViT-B-32__openai is still not working properly. Can you tell me where to download it and where to place it?

mertalev commented 10 months ago

The default smart search model is available here. You need to download all of the files there and place them in /cache/clip/ViT-B-32__openai with the same file structure. For instance, /cache/clip/ViT-B-32__openai/config.json, /cache/clip/ViT-B-32__openai/textual/model.onnx, and so on for all of the other files at the model link.
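A similar sketch can spot-check the CLIP layout. Only config.json and textual/model.onnx are named above; visual/model.onnx is an assumption here, mirroring the textual encoder layout, and the full repo contains more files than this list:

```python
from pathlib import Path

# Partial file list for the default CLIP model under /cache/clip/ViT-B-32__openai.
# visual/model.onnx is an assumption mirroring the textual encoder layout.
CLIP_DIR = "clip/ViT-B-32__openai"
SPOT_CHECK = ["config.json", "textual/model.onnx", "visual/model.onnx"]

def clip_files_present(cache_root="/cache"):
    """True if the spot-checked CLIP files exist in the model cache."""
    root = Path(cache_root) / CLIP_DIR
    return all((root / f).is_file() for f in SPOT_CHECK)
```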

But I should mention that ViT-B-32__openai only understands searches in English. If you want to search in Chinese, then you will need to choose a model listed here instead.

Most of these models have too many files to download manually, so you would need to either choose one that has fewer files (like nllb-clip-base-siglip__v1) or download through the Hugging Face download CLI (use the --local-dir flag to download to a particular folder).

Once you've chosen a model, change the model name in the web app under Administration -> Machine Learning Settings -> Smart Search -> Model Name, and download that model's files with the model's name as the folder name. For instance, if you chose nllb-clip-base-siglip__v1, the files would go in /cache/clip/nllb-clip-base-siglip__v1.
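The download step above can also be scripted with the huggingface_hub package instead of the CLI. A sketch, assuming the models are mirrored under the immich-app organization on Hugging Face (verify the exact repo id on huggingface.co before running):

```python
def clip_cache_dir(model_name, cache_root="/cache"):
    # Immich's ML service expects CLIP models under <cache>/clip/<model name>.
    return f"{cache_root}/clip/{model_name}"

def download_clip_model(model_name, cache_root="/cache"):
    # Imported here so the path helper above works without the package installed.
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    target = clip_cache_dir(model_name, cache_root)
    # Assumed repo id; check the model's actual location on Hugging Face.
    snapshot_download(repo_id=f"immich-app/{model_name}", local_dir=target)
    return target
```

The folder name must match the Model Name set in the admin UI exactly, double underscores included.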