immich-app / immich

High performance self-hosted photo and video management solution.
https://immich.app

ERROR [ImmichMicroservices] [JobService] Unable to run job handler (smartSearch/smart-search): QueryFailedError: pgvecto.rs: The given vector is invalid for input #9871

Open · Str1atum opened this issue 5 months ago

Str1atum commented 5 months ago

The bug

Brand new Docker stack and database: I am trying to add around 700 files from an external library, with machine learning enabled (see the Docker files below).

The microservices container logs many errors like the following:

[Nest] 7 - 05/29/2024, 10:14:59 PM ERROR [ImmichMicroservices] [JobService] Unable to run job handler (smartSearch/smart-search): QueryFailedError: pgvecto.rs: The given vector is invalid for input. ADVICE: Check if dimensions and scalar type of the vector is matched with the index.
[Nest] 7 - 05/29/2024, 10:14:59 PM ERROR [ImmichMicroservices] [JobService] QueryFailedError: pgvecto.rs: The given vector is invalid for input. ADVICE: Check if dimensions and scalar type of the vector is matched with the index.
    at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:219:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async InsertQueryBuilder.execute (/usr/src/app/node_modules/typeorm/query-builder/InsertQueryBuilder.js:106:33)
    at async SearchRepository.upsert (/usr/src/app/dist/repositories/search.repository.js:188:9)
    at async SmartInfoService.handleEncodeClip (/usr/src/app/dist/services/smart-info.service.js:91:9)
    at async /usr/src/app/dist/services/job.service.js:145:36
    at async Worker.processJob (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:394:28)
    at async Worker.retryIfFailed (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:581:24)
[Nest] 7 - 05/29/2024, 10:14:59 PM ERROR [ImmichMicroservices] [JobService] Object:
{
  "id": "307c7eaf-8ac9-4a7a-b909-7bee74d5aaf1",
  "source": "upload"
}
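The ADVICE in the log points at a mismatch between the dimensions (or scalar type) of the embedding being inserted and the vector index in the database. One way to check this from the Docker host is sketched below; it assumes Immich's smart_search table with an embedding column, and uses the container name and credentials from the compose/.env files further down (adjust names if yours differ):

# Show the declared type and dimension of the embedding column, e.g. vector(512)
docker exec -it immich_postgres psql -U postgres -d immich -c \
  "SELECT attname, format_type(atttypid, atttypmod) AS column_type
     FROM pg_attribute
    WHERE attrelid = 'smart_search'::regclass AND attname = 'embedding';"

# List the index definitions that the insert has to match
docker exec -it immich_postgres psql -U postgres -d immich -c \
  "SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'smart_search';"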

The OS that Immich Server is running on

Ubuntu 24.04 LTS

Version of Immich Server

1.105.1

Version of Immich Mobile App

v.105

Platform with the issue

Your docker-compose.yml content

#
# WARNING: Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.
#

name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    command: ['start.sh', 'immich']
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
      - /mnt/somestorage:/diskstation-photos:ro
    env_file:
      - .env
    ports:
      - 2283:3001
    labels:
      - traefik.enable=true
      - traefik.http.routers.immich.rule=Host(`immich.mydomain.org`)
      - traefik.http.routers.immich.entrypoints=websecure
      - traefik.http.routers.immich.tls=true
      - traefik.http.routers.immich.tls.certresolver=letsencrypt
    depends_on:
      - redis
      - database
    restart: unless-stopped

  immich-microservices:
    container_name: immich_microservices
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/hardware-transcoding
      file: hwaccel.transcoding.yml
      service: quicksync # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    command: ['start.sh', 'microservices']
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
      - /mnt/somestorage:/diskstation-photos:ro
    env_file:
      - .env
    depends_on:
      - redis
      - database
    restart: unless-stopped

  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino
    extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
      file: hwaccel.ml.yml
      service: openvino # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: unless-stopped

  redis:
    container_name: immich_redis
    image: registry.hub.docker.com/library/redis:6.2-alpine@sha256:84882e87b54734154586e5f8abd4dce69fe7311315e2fc6d67c29614c8de2672
    restart: unless-stopped

  database:
    container_name: immich_postgres
    image: registry.hub.docker.com/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    restart: always
    command: ["postgres", "-c" ,"shared_preload_libraries=vectors.so", "-c", 'search_path="$$user", public, vectors', "-c", "logging_collector=on", "-c", "max_wal_size=2GB", "-c", "shared_buffers=512MB", "-c", "wal_compression=on"]

volumes:
  model-cache:

networks:
  default:
    name: traefik
    external: true

# hwaccel.transcoding.yml

version: "3.8"

# Configurations for hardware-accelerated transcoding

# If using Unraid or another platform that doesn't allow multiple Compose files,
# you can inline the config for a backend by copying its contents
# into the immich-microservices service in the docker-compose.yml file.

# See https://immich.app/docs/features/hardware-transcoding for more info on using hardware transcoding.

services:
  cpu: {}

  nvenc:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
                - compute
                - video

  quicksync:
    devices:
      - /dev/dri:/dev/dri

  rkmpp:
    security_opt: # enables full access to /sys and /proc, still far better than privileged: true
      - systempaths=unconfined
      - apparmor=unconfined
    group_add:
      - video
    devices:
      - /dev/rga:/dev/rga
      - /dev/dri:/dev/dri
      - /dev/dma_heap:/dev/dma_heap
      - /dev/mpp_service:/dev/mpp_service
      #- /dev/mali0:/dev/mali0 # only required to enable OpenCL-accelerated HDR -> SDR tonemapping
    volumes:
      #- /etc/OpenCL:/etc/OpenCL:ro # only required to enable OpenCL-accelerated HDR -> SDR tonemapping
      #- /usr/lib/aarch64-linux-gnu/libmali.so.1:/usr/lib/aarch64-linux-gnu/libmali.so.1:ro # only required to enable OpenCL-accelerated HDR -> SDR tonemapping

  vaapi:
    devices:
      - /dev/dri:/dev/dri

  vaapi-wsl: # use this for VAAPI if you're running Immich in WSL2
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /usr/lib/wsl:/usr/lib/wsl
    environment:
      - LD_LIBRARY_PATH=/usr/lib/wsl/lib
      - LIBVA_DRIVER_NAME=d3d12

# hwaccel.ml.yml

version: "3.8"

# Configurations for hardware-accelerated machine learning

# If using Unraid or another platform that doesn't allow multiple Compose files,
# you can inline the config for a backend by copying its contents 
# into the immich-machine-learning service in the docker-compose.yml file.

# See https://immich.app/docs/features/ml-hardware-acceleration for info on usage.

services:
  armnn:
    devices:
      - /dev/mali0:/dev/mali0
    volumes:
      - /lib/firmware/mali_csffw.bin:/lib/firmware/mali_csffw.bin:ro # Mali firmware for your chipset (not always required depending on the driver)
      - /usr/lib/libmali.so:/usr/lib/libmali.so:ro # Mali driver for your chipset (always required)

  cpu: {}

  cuda:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu

  openvino:
    device_cgroup_rules:
      - "c 189:* rmw"
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /dev/bus/usb:/dev/bus/usb
    environment:
      - NEOReadDebugKeys=1
      - OverrideGpuAddressSpace=48 

  openvino-wsl:
    devices:
      - /dev/dri:/dev/dri
      - /dev/dxg:/dev/dxg
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /usr/lib/wsl:/usr/lib/wsl

Your .env content

# You can find documentation for all the supported env variables at https://immich.app/docs/install/environment-variables

# The location where your uploaded files are stored
UPLOAD_LOCATION=/immich-photos
# The location where your database files are stored
DB_DATA_LOCATION=/docker-storage/immich_db

# To set a timezone, uncomment the next line and change Etc/UTC to a TZ identifier from this list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List
TZ=Europe/Berlin

# The Immich version to use. You can pin this to a specific version like "v1.71.0"
IMMICH_VERSION=release

# Connection secret for postgres. You should change it to a random password
DB_PASSWORD=postgres

# The values below this line do not need to be changed
###################################################################################
DB_USERNAME=postgres
DB_DATABASE_NAME=immich

IMMICH_LOG_LEVEL=debug

Reproduction steps

Adding files from an external library in a base setup with a new database.

Relevant log output

(Two screenshots of the error log attached: 29.05.2024 at 22:22:04 and 22:22:27.)

Additional information

No response

yu-jingrui commented 3 months ago

I am encountering the same error in 1.111.0.

hunmac9 commented 3 months ago

Same error here. Smart search still seems to work, but possibly missing files?

cstmth commented 2 weeks ago

I am encountering the very same issues. I am running on a Synology DS224+ (Intel Celeron J4125, 18 GB RAM).

bdsoha commented 2 weeks ago

Same here, on v1.119.1.

mertalev commented 2 weeks ago

Can all of you confirm that you're using non-default CLIP models? If so, when did you change the model? You can check the files in the machine learning service's /cache/clip folder for the date when the model was downloaded.
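
For example, from the Docker host (a minimal sketch, assuming the default container name immich_machine_learning from the compose file above):

# List the cached CLIP models and their download dates
docker exec immich_machine_learning ls -alh /cache/clip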

bdsoha commented 2 weeks ago

$ ls -alh /cache/clip
drwxrwxrwx 6 1024 users 4.0K May 19 21:52 ViT-B-32__openai