immich-app / immich

High performance self-hosted photo and video management solution.
https://immich.app
GNU Affero General Public License v3.0

Immich unreachable after upgrade to v1.106.3 #10266

Closed pietrouk closed 2 months ago

pietrouk commented 4 months ago

The bug

After upgrading to v1.106.3 and trying to connect to server via browser, I get

"Service Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later."

Steps to reproduce:

  1. I stopped all the containers except postgres.
  2. Connected to database and performed REINDEX DATABASE immich; successfully
  3. Removed all images, volumes and containers from Portainer
  4. Rebooted host
  5. Created a brand new immich-app folder and moved the docker-compose and env files there
  6. docker compose pull && docker compose up -d

output:

docker compose pull && docker compose up -d
[+] Pulling 62/46
 ✔ immich-server Pulled                                                                                                                                                                                                            135.9s 
 ✔ immich-machine-learning Pulled                                                                                                                                                                                                  133.3s 
 ✔ redis Pulled                                                                                                                                                                                                                     49.2s 
 ✔ database Pulled                                                                                                                                                                                                                  76.1s 

[+] Running 5/5
 ✔ Network immich_default             Created                                                                                                                                                                                        0.1s 
 ✔ Container immich_redis             Started                                                                                                                                                                                        9.9s 
 ✔ Container immich_machine_learning  Started                                                                                                                                                                                       10.0s 
 ✔ Container immich_postgres          Started                                                                                                                                                                                       10.1s 
 ✔ Container immich_server            Started 
  1. docker compose ps
NAME                      IMAGE                                                                                                                  COMMAND                  SERVICE                   CREATED          STATUS                        PORTS
immich_machine_learning   ghcr.io/immich-app/immich-machine-learning:release                                                                     "tini -- ./start.sh"     immich-machine-learning   16 minutes ago   Up 16 minutes (healthy)       
immich_postgres           docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0   "docker-entrypoint.s…"   database                  16 minutes ago   Up 16 minutes (healthy)       5432/tcp
immich_redis              docker.io/redis:6.2-alpine@sha256:d6c2911ac51b289db208767581a5d154544f2b2fe4914ea5056443f62dc6e900                     "docker-entrypoint.s…"   redis                     16 minutes ago   Up 16 minutes (healthy)       6379/tcp
immich_server             ghcr.io/immich-app/immich-server:release                                                                               "tini -- /bin/bash s…"   immich-server             16 minutes ago   Up About a minute (healthy)   0.0.0.0:2283->3001/tcp, :::2283->3001/tcp
  1. docker container logs immich_server
    Detected CPU Cores: 4
    Starting api worker
    Starting microservices worker
    [Nest] 6  - 06/13/2024, 8:59:54 AM     LOG [Microservices:EventRepository] Initialized websocket server
    [Nest] 16  - 06/13/2024, 8:59:54 AM     LOG [Api:EventRepository] Initialized websocket server
    Migration "RemoveLibraryType1715804005643" failed, error: could not create unique index "UQ_assets_owner_checksum"
    QueryFailedError: could not create unique index "UQ_assets_owner_checksum"
    at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:219:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async RemoveLibraryType1715804005643.up (/usr/src/app/dist/migrations/1715804005643-RemoveLibraryType.js:21:9)
    at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
    at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:265:35)
    at async DatabaseRepository.runMigrations (/usr/src/app/dist/repositories/database.repository.js:169:9)
    at async /usr/src/app/dist/services/database.service.js:133:17
    at async /usr/src/app/dist/repositories/database.repository.js:177:23 {
    query: 'CREATE UNIQUE INDEX "UQ_assets_owner_checksum" ON "assets" ("ownerId", "checksum") WHERE "libraryId" IS NULL',
    parameters: undefined,
    driverError: error: could not create unique index "UQ_assets_owner_checksum"
      at /usr/src/app/node_modules/pg/lib/client.js:526:17
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:184:25)
      at async RemoveLibraryType1715804005643.up (/usr/src/app/dist/migrations/1715804005643-RemoveLibraryType.js:21:9)
      at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
      at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:265:35)
      at async DatabaseRepository.runMigrations (/usr/src/app/dist/repositories/database.repository.js:169:9)
      at async /usr/src/app/dist/services/database.service.js:133:17
      at async /usr/src/app/dist/repositories/database.repository.js:177:23 {
    length: 311,
    severity: 'ERROR',
    code: '23505',
    detail: 'Key ("ownerId", checksum)=(f37f8176-816e-4570-b85b-ce768cd06f5b, \\x0caccd4e75440ca23668eb73a5649912ab19c7d6) is duplicated.',
    hint: undefined,
    position: undefined,
    internalPosition: undefined,
    internalQuery: undefined,
    where: 'parallel worker',
    schema: 'public',
    table: 'assets',
    column: undefined,
    dataType: undefined,
    constraint: 'UQ_assets_owner_checksum',
    file: 'tuplesort.c',
    line: '4297',
    routine: 'comparetup_index_btree'
    },
    length: 311,
    severity: 'ERROR',
    code: '23505',
    detail: 'Key ("ownerId", checksum)=(f37f8176-816e-4570-b85b-ce768cd06f5b, \\x0caccd4e75440ca23668eb73a5649912ab19c7d6) is duplicated.',
    hint: undefined,
    position: undefined,
    internalPosition: undefined,
    internalQuery: undefined,
    where: 'parallel worker',
    schema: 'public',
    table: 'assets',
    column: undefined,
    dataType: undefined,
    constraint: 'UQ_assets_owner_checksum',
    file: 'tuplesort.c',
    line: '4297',
    routine: 'comparetup_index_btree'
    }
    microservices worker exited with code 1
    Detected CPU Cores: 4
    Starting api worker
    Starting microservices worker
  2. docker container logs immich_postgres
PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-06-13 08:59:27.886 UTC [1] LOG:  redirecting log output to logging collector process
2024-06-13 08:59:27.886 UTC [1] HINT:  Future log output will appear in directory "log".

The OS that Immich Server is running on

linux aarch64 Debian GNU/Linux 12 (bookworm)

Version of Immich Server

v1.106.3

Version of Immich Mobile App

v1.106.3

Platform with the issue

Your docker-compose.yml content

#
# WARNING: Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.
#

name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.transcoding.yml
    #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - 2283:3001
    depends_on:
      - redis
      - database
    restart: always

  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: cpu # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always

  redis:
    container_name: immich_redis
    image: docker.io/redis:6.2-alpine@sha256:d6c2911ac51b289db208767581a5d154544f2b2fe4914ea5056443f62dc6e900
    healthcheck:
      test: redis-cli ping || exit 1
    env_file:
      - .env
    restart: always

  database:
    container_name: immich_postgres
    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    healthcheck:
      test: pg_isready --dbname='${DB_DATABASE_NAME}' || exit 1; Chksum="$$(psql --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' --tuples-only --no-align --command='SELECT COALESCE(SUM(checksum_failures), 0) FROM pg_stat_database')"; echo "checksum failure count is $$Chksum"; [ "$$Chksum" = '0' ] || exit 1
      interval: 5m
      start_interval: 30s
      start_period: 5m
    command: ["postgres", "-c" ,"shared_preload_libraries=vectors.so", "-c", 'search_path="$$user", public, vectors', "-c", "logging_collector=on", "-c", "max_wal_size=2GB", "-c", "shared_buffers=512MB", "-c", "wal_compression=on"]
    env_file:
      - .env
    restart: always

volumes:
  model-cache:

Your .env content

UPLOAD_LOCATION=/my/folder/photos
IMMICH_VERSION=release
DB_PASSWORD=pass
DB_HOSTNAME=immich_postgres
DB_USERNAME=postgres
DB_DATABASE_NAME=immich
REDIS_HOSTNAME=immich_redis
DB_DATA_LOCATION=/my/folder/db

Reproduction steps

See above

Relevant log output

See above

Additional information

No response

zackpollard commented 4 months ago

Hey, are you in our Discord? It might help facilitate a faster discussion about this. If not, are you able to run some postgres commands for me and send the output if I send them to you here?

mluettermann commented 4 months ago

I face the same issue, with the same setup.

immich_server Logs

Detected CPU Cores: 4
Starting api worker
[Nest] 17  - 06/13/2024, 11:00:49 AM     LOG [Api:EventRepository] Initialized websocket server
Migration "RemoveLibraryType1715804005643" failed, error: could not create unique index "UQ_assets_owner_checksum"
QueryFailedError: could not create unique index "UQ_assets_owner_checksum"
    at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:219:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async RemoveLibraryType1715804005643.up (/usr/src/app/dist/migrations/1715804005643-RemoveLibraryType.js:21:9)
    at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
    at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:265:35)
    at async DatabaseRepository.runMigrations (/usr/src/app/dist/repositories/database.repository.js:169:9)
    at async /usr/src/app/dist/services/database.service.js:133:17
    at async /usr/src/app/dist/repositories/database.repository.js:177:23 {
  query: 'CREATE UNIQUE INDEX "UQ_assets_owner_checksum" ON "assets" ("ownerId", "checksum") WHERE "libraryId" IS NULL',
  parameters: undefined,
  driverError: error: could not create unique index "UQ_assets_owner_checksum"
      at /usr/src/app/node_modules/pg/lib/client.js:526:17
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:184:25)
      at async RemoveLibraryType1715804005643.up (/usr/src/app/dist/migrations/1715804005643-RemoveLibraryType.js:21:9)
      at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
      at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:265:35)
      at async DatabaseRepository.runMigrations (/usr/src/app/dist/repositories/database.repository.js:169:9)
      at async /usr/src/app/dist/services/database.service.js:133:17
      at async /usr/src/app/dist/repositories/database.repository.js:177:23 {
    length: 294,
    severity: 'ERROR',
    code: '23505',
    detail: 'Key ("ownerId", checksum)=(5b99ca2b-7075-4ea7-b286-6bd021873d04, \\xe5ec8089226a8983e5d9e29a023223cec8ed5ccb) is duplicated.',
    hint: undefined,
    position: undefined,
    internalPosition: undefined,
    internalQuery: undefined,
    where: undefined,
    schema: 'public',
    table: 'assets',
    column: undefined,
    dataType: undefined,
    constraint: 'UQ_assets_owner_checksum',
    file: 'tuplesort.c',
    line: '4297',
    routine: 'comparetup_index_btree'
  },
  length: 294,
  severity: 'ERROR',
  code: '23505',
  detail: 'Key ("ownerId", checksum)=(5b99ca2b-7075-4ea7-b286-6bd021873d04, \\xe5ec8089226a8983e5d9e29a023223cec8ed5ccb) is duplicated.',
  hint: undefined,
  position: undefined,
  internalPosition: undefined,
  internalQuery: undefined,
  where: undefined,
  schema: 'public',
  table: 'assets',
  column: undefined,
  dataType: undefined,
  constraint: 'UQ_assets_owner_checksum',
  file: 'tuplesort.c',
  line: '4297',
  routine: 'comparetup_index_btree'
}
node:internal/process/promises:289
            triggerUncaughtException(err, true /* fromPromise */);
            ^

QueryFailedError: could not create unique index "UQ_assets_owner_checksum"
    at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:219:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async RemoveLibraryType1715804005643.up (/usr/src/app/dist/migrations/1715804005643-RemoveLibraryType.js:21:9)
    at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
    at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:265:35)
    at async DatabaseRepository.runMigrations (/usr/src/app/dist/repositories/database.repository.js:169:9)
    at async /usr/src/app/dist/services/database.service.js:133:17
    at async /usr/src/app/dist/repositories/database.repository.js:177:23 {
  query: 'CREATE UNIQUE INDEX "UQ_assets_owner_checksum" ON "assets" ("ownerId", "checksum") WHERE "libraryId" IS NULL',
  parameters: undefined,
  driverError: error: could not create unique index "UQ_assets_owner_checksum"
      at /usr/src/app/node_modules/pg/lib/client.js:526:17
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:184:25)
      at async RemoveLibraryType1715804005643.up (/usr/src/app/dist/migrations/1715804005643-RemoveLibraryType.js:21:9)
      at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
      at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:265:35)
      at async DatabaseRepository.runMigrations (/usr/src/app/dist/repositories/database.repository.js:169:9)
      at async /usr/src/app/dist/services/database.service.js:133:17
      at async /usr/src/app/dist/repositories/database.repository.js:177:23 {
    length: 294,
    severity: 'ERROR',
    code: '23505',
    detail: 'Key ("ownerId", checksum)=(5b99ca2b-7075-4ea7-b286-6bd021873d04, \\xe5ec8089226a8983e5d9e29a023223cec8ed5ccb) is duplicated.',
    hint: undefined,
    position: undefined,
    internalPosition: undefined,
    internalQuery: undefined,
    where: undefined,
    schema: 'public',
    table: 'assets',
    column: undefined,
    dataType: undefined,
    constraint: 'UQ_assets_owner_checksum',
    file: 'tuplesort.c',
    line: '4297',
    routine: 'comparetup_index_btree'
  },
  length: 294,
  severity: 'ERROR',
  code: '23505',
  detail: 'Key ("ownerId", checksum)=(5b99ca2b-7075-4ea7-b286-6bd021873d04, \\xe5ec8089226a8983e5d9e29a023223cec8ed5ccb) is duplicated.',
  hint: undefined,
  position: undefined,
  internalPosition: undefined,
  internalQuery: undefined,
  where: undefined,
  schema: 'public',
  table: 'assets',
  column: undefined,
  dataType: undefined,
  constraint: 'UQ_assets_owner_checksum',
  file: 'tuplesort.c',
  line: '4297',
  routine: 'comparetup_index_btree'
}

Node.js v20.14.0
api worker exited with code 1

alextran1502 commented 4 months ago

@mluettermann Can you connect to the database and perform REINDEX DATABASE immich;?
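
For reference, a non-interactive way to run that reindex from the host shell; this is just a sketch, assuming the container name, user, and database from the compose file and .env shared in this issue.

# Runs the reindex inside the postgres container in one go (adjust the
# container name, user, and database if yours differ).
docker exec immich_postgres psql -U postgres -d immich -c 'REINDEX DATABASE immich;'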

junder873 commented 4 months ago

I am experiencing this issue as well, and as mentioned in #10270, I have previously moved some photos from one user to another directly in the database.

I tried running REINDEX DATABASE immich; but that did not resolve the issue.

lhoursquentin commented 4 months ago

Ran into the same issue; the reindex did not help, and I had indeed messed with the db to migrate some pictures to another user.

I looked into what was causing the unique index creation failure using this query:

select a."originalPath" from assets a
join (
  select a."ownerId", a.checksum from assets a
  join libraries l on a."libraryId" = l.id
  where l.type = 'UPLOAD'
  group by a."ownerId", a.checksum having count(*) > 1
) s
on a.checksum = s.checksum
order by a.checksum;

This allowed me to figure out where the duplicated files were located.

I then double checked that the files on disk were indeed identical, then deleted the copy belonging to the incorrect user (in my case I had one in library/admin which was incorrect and one in library/<id of another user> for every duplicate).

Then I proceeded to delete those rows in the db as well, restarted Immich, and everything was good to go.

(as usual, make backups before and proceed with caution when deleting anything from the db and from your disk)
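
For reference, a minimal sketch of the database cleanup described above, not an official Immich procedure: it assumes the stray copies all live under the admin library folder (as in this case) and that you have a database backup. Swap the DELETE for a SELECT first to preview the rows.

# Sketch only: removes the admin-path copy of each duplicated (owner, checksum)
# pair. Back up the database first and preview with SELECT before deleting.
docker exec immich_postgres psql -U postgres -d immich -c "
  DELETE FROM assets
  WHERE \"originalPath\" LIKE 'upload/library/admin/%'
    AND (\"ownerId\", checksum) IN (
      SELECT \"ownerId\", checksum FROM assets
      GROUP BY \"ownerId\", checksum
      HAVING count(*) > 1);"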

eddmanoo commented 4 months ago

Start over with a new yml file and new config ... works for me.

toczekmj commented 4 months ago

Sorry to interrupt, but shouldn't we have gotten some official update tutorial, other than just 'recreate the docker containers from the new docker compose'? I am also dealing with some kind of issue, but in my case it is just about immich_server not exposing ports properly. I kindly insist on some official info about the migration.

//edit: also, in case someone finds this useful: if your immich_server is constantly restarting, try removing the containers, running docker compose pull, and then docker compose up one more time. In my case this fixed the issue.

alextran1502 commented 4 months ago

@toczekmj The majority of the updates went fine. These cases are often related to previous manual database manipulation.

If you have issues, you can open a discussion thread and describe the problem you face for help

junder873 commented 4 months ago

I tried starting with a new yml file but that did not fix it for me. I would also rather not manually find all the photos that are duplicated since that is 65,000+ photos for me.

lhoursquentin commented 4 months ago

I would also rather not manually find all the photos that are duplicated since that is 65,000+ photos for me.

If they have the same checksum (which is effectively what is preventing the unique index creation from working) then they are supposed to be identical.

In my case I have 50k photos, and the query above only returned 400 matches (so 200 to delete). I checked a few by hand, then double checked the rest with a small shell script. In /tmp/duplicates I stored the result of the SQL query; duplicates are ordered two by two (a sketch of one way to generate that file follows the script):

while IFS= read -r f1; do
  IFS= read -r f2
  if ! cmp "$f1" "$f2"; then
    echo "ERROR: files $f1 & $f2 are not duplicates" >&2
    break
  fi
done < /tmp/duplicates
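
For reference, one way the /tmp/duplicates file could be generated: a sketch only, assuming the container name and credentials from the compose file and .env in this issue, and that UPLOAD_LOCATION=/my/folder/photos as shown above.

# Run the duplicate-path query through psql with header and padding stripped,
# so each originalPath lands on its own line, then map the container-relative
# "upload/" prefix to the host UPLOAD_LOCATION so cmp can open the files.
docker exec immich_postgres psql -U postgres -d immich --tuples-only --no-align -c "
  SELECT a.\"originalPath\" FROM assets a
  JOIN (
    SELECT a.\"ownerId\", a.checksum FROM assets a
    JOIN libraries l ON a.\"libraryId\" = l.id
    WHERE l.type = 'UPLOAD'
    GROUP BY a.\"ownerId\", a.checksum HAVING count(*) > 1
  ) s ON a.checksum = s.checksum
  ORDER BY a.checksum;" |
  sed 's|^upload/|/my/folder/photos/|' > /tmp/duplicates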

alex-cardenas commented 4 months ago

Coming over from the other two issues (https://github.com/immich-app/immich/issues/10270 and https://github.com/immich-app/immich/issues/10186) as someone who did migrate one user's photos to another's library (I created two users for myself, one for each phone - having realized this wasn't best practice, hence the migration).

I ran @lhoursquentin's first SQL command and found 5,598 returned rows. Spot checked 6 of these (3 pairs) and they were indeed duplicates. Debating on the best way to move forward. Simply delete the ones from 'old' user?

Edit: @lhoursquentin, did your shell script return any that were not duplicates? Or were all of your 400 assets duplicates?

zackpollard commented 4 months ago

Hey all,

So for anyone experiencing this issue, you can use the following steps to first identify if this is the same issue that has been affecting others, and then use the other commands to recover from the problem, with some caveats.

Important

Testing if you have this exact issue

  1. Connect to your postgres container using docker exec -it immich_postgres psql immich postgres, which will get you into an SQL terminal where you can run the following commands.
  2. Run the following SQL statement; if it outputs any entries, you have the problem described above. (A non-interactive way to run the same check from the host shell is sketched after this list.)
    SELECT count(*) as duplicate_count, checksum, "ownerId"
    FROM assets
    GROUP BY checksum, "ownerId"
    HAVING count(*) > 1
    ORDER BY duplicate_count DESC;
  3. If any of the entries return more than 2 for duplicate_count, the following fix will not work for you, or you will need to run the commands multiple times to get into a fixed state. I recommend contacting us on discord or here if you have a duplicate_count greater than 2, as it'll need some special attention to ensure you end up in a good state.
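
If you prefer not to open an interactive psql session, the same check can be run in one go from the host shell. This is just a convenience wrapper around the query above, assuming the container name and credentials from the compose file and .env in this issue:

# Prints any (checksum, ownerId) pairs that occur more than once; an empty
# result (0 rows) means this particular problem does not affect you.
docker exec immich_postgres psql -U postgres -d immich -c "
  SELECT count(*) AS duplicate_count, checksum, \"ownerId\"
  FROM assets
  GROUP BY checksum, \"ownerId\"
  HAVING count(*) > 1
  ORDER BY duplicate_count DESC;"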

Fixing the problem

  1. Make sure you have read the steps in "Testing if you have this exact issue" before applying any of the below commands, to ensure this applies to you.
  2. Shut down your immich stack with docker compose down
  3. Start up your postgres server with docker start immich_postgres
  4. Connect to your postgres container using docker exec -it immich_postgres psql immich postgres, which will get you into an SQL terminal where you can run the following commands.
  5. We have two options to remedy this problem. You can either create a new user and move the duplicate asset entries there; this lets you go through and make sure you are happy with deleting these assets, and it also lets you fully delete them, removing them from disk too. The second option is simply deleting them from the database, which will orphan the files on disk; these will then show up in the untracked files section of the repair page, so you can manually clean them up later if you like.

Option 1 (longer) - Moving duplicates to a new user

  1. Create the new user by duplicating one of your existing users using the following command and replacing the existinguser@email.com
    INSERT INTO users(email, password, "createdAt", "profileImagePath", "isAdmin", "shouldChangePassword", "deletedAt", "oauthId", "updatedAt", name, "quotaSizeInBytes", "quotaUsageInBytes", status)
    SELECT 'tempemail@example.com', password, "createdAt", "profileImagePath", "isAdmin", "shouldChangePassword", "deletedAt", "oauthId", "updatedAt", name, "quotaSizeInBytes", "quotaUsageInBytes", status FROM users WHERE email = 'existinguser@email.com';
  2. Get the id of the new user using the following command
    SELECT id, email FROM users WHERE email = 'tempemail@example.com';
  3. Run the following SQL command to move one of the two duplicates in each asset pair to that new user, replacing NEW_USER_ID with the id you got above. (An optional sanity check for this step is sketched after this list.)
    UPDATE assets SET "ownerId" = 'NEW_USER_ID' WHERE id IN (
    SELECT DISTINCT ON (checksum, "ownerId") id
    FROM assets
    WHERE id IN (
      SELECT id FROM assets
        WHERE (checksum, "ownerId") IN (
            SELECT checksum, "ownerId"
                FROM assets
                GROUP BY checksum, "ownerId"
                HAVING count(*) > 1)
        ORDER BY "libraryId"
    )
    );
  4. Start up your immich stack with docker compose up -d and the migrations should now all run and your server should start successfully!
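
Not an official step, just a hedged sanity check you can run after the UPDATE in step 3 and before bringing the stack back up (NEW_USER_ID is the id from step 2; container name, user, and database as used earlier in this thread):

# The count should match the number of rows the "Testing" query returned
# (one asset moved per duplicate group); re-running that testing query
# afterwards should return no rows.
docker exec immich_postgres psql -U postgres -d immich -c \
  "SELECT count(*) FROM assets WHERE \"ownerId\" = 'NEW_USER_ID';"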

Option 2 (shorter) - Deleting one set of the duplicates

  1. Run the following command; it will find all the duplicates, choose one from each group, and remove it from the database (the original image files will still exist on disk). A read-only preview of the rows it would affect is sketched after the command.
    DELETE FROM assets WHERE id IN (
    SELECT DISTINCT ON (checksum, "ownerId") id
    FROM assets
    WHERE id IN (
      SELECT id FROM assets
        WHERE (checksum, "ownerId") IN (
            SELECT checksum, "ownerId"
                FROM assets
                GROUP BY checksum, "ownerId"
                HAVING count(*) > 1)
        ORDER BY "libraryId"
    )
    );
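
If you want to see which rows the delete above would target before running it, one option (not from the Immich team, just a convenience sketch) is to wrap the same id selection in a SELECT. Note that DISTINCT ON without a matching ORDER BY does not deterministically pick which copy of each pair is chosen, so treat this as a preview of the affected duplicate groups rather than an exact row-for-row match with the DELETE.

# Shows the id, owner, and path of one candidate row per duplicate group;
# nothing is modified by this query.
docker exec immich_postgres psql -U postgres -d immich -c "
  SELECT id, \"ownerId\", \"originalPath\" FROM assets WHERE id IN (
    SELECT DISTINCT ON (checksum, \"ownerId\") id
    FROM assets
    WHERE id IN (
      SELECT id FROM assets
      WHERE (checksum, \"ownerId\") IN (
        SELECT checksum, \"ownerId\"
        FROM assets
        GROUP BY checksum, \"ownerId\"
        HAVING count(*) > 1)
      ORDER BY \"libraryId\"
    )
  );"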

Issues

If you have any issues with the above process, start by posting a new comment in this issue or making a thread in #help-desk-support in our Discord, referencing this issue and stating the step you got to and the error you are coming up against.

Hopefully this helps rectify this issue for anyone facing it moving forwards! 😄

lhoursquentin commented 4 months ago

@lhoursquentin, did your shell script return any that were not duplicates? Or were all of your 400 assets duplicates?

All were duplicates in my case, this did not echo any error.

alex-cardenas commented 4 months ago

@zackpollard Thank you for the detailed notes! I was just reading up on the discord thread and trying to follow all the commands sent (especially the ones with or without ; haha)

Here are my testing results

Important

Testing

Ran this command using Adminer (a container to connect to postgres, especially since my Immich container is stopped). Results: 2,799 rows, each with a value of 2 in the duplicate_count column (which is exactly half of the 5,598 rows returned from @lhoursquentin's SQL command).

Option 2 - Deleting one set of the duplicates

@zackpollard From which user will the duplicated images be deleted? Is there a way I can have it select one from the two?

zackpollard commented 4 months ago

@zackpollard From which user will the duplicated images be deleted? Is there a way I can have it select one from the two?

This problem occurs when there are two of the same asset within the same user, generally because there was manual manipulation to move assets between users previously, then they accidentally got uploaded again (only possible after this manual manipulation) so it'll take one of the two assets within the user and delete them from the database.

lhoursquentin commented 4 months ago

@zackpollard From which user will the duplicated images be deleted? Is there a way I can have it select one from the two?

This problem occurs when there are two of the same asset within the same user, generally because there was manual manipulation to move assets between users previously, then they accidentally got uploaded again (only possible after this manual manipulation) so it'll take one of the two assets within the user and delete them from the database.

For consistency I think it would make sense to delete the assets with the originalPath pointing to the incorrect user (this won't change anything now, but it seems more future proof)

alex-cardenas commented 4 months ago

@zackpollard When I was looking at @lhoursquentin's SQL

select a."originalPath" from assets a
join (
  select a."ownerId", a.checksum from assets a
  join libraries l on a."libraryId" = l.id
  where l.type = 'UPLOAD'
  group by a."ownerId", a.checksum having count(*) > 1
) s
on a.checksum = s.checksum
order by a.checksum;

I noticed that the duplicated assets were coming from the two users' library folders (one from each folder). I don't believe it's the case that all images were under one user's library.

Since I'm no longer using the 'old' user (id: 253b...), I'd prefer to delete the files from its path rather than the 'current' user (id: 1ee4...). Does that make sense? And is that possible?

zackpollard commented 4 months ago

@zackpollard From which user will the duplicated images be deleted? Is there a way I can have it select one from the two?

This problem occurs when there are two of the same asset within the same user, generally because there was manual manipulation to move assets between users previously, then they accidentally got uploaded again (only possible after this manual manipulation) so it'll take one of the two assets within the user and delete them from the database.

For consistency I think it would make sense to delete the assets with the originalPath pointing to the incorrect user (this won't change anything now, but it seems more future proof)

After the server upgrade migrations run (the ones currently erroring and stopping you from upgrading), the incorrect data will be removed from the database anyway, so it doesn't really matter.

zackpollard commented 4 months ago

@zackpollard When I was looking at @lhoursquentin's SQL

select a."originalPath" from assets a
join (
  select a."ownerId", a.checksum from assets a
  join libraries l on a."libraryId" = l.id
  where l.type = 'UPLOAD'
  group by a."ownerId", a.checksum having count(*) > 1
) s
on a.checksum = s.checksum
order by a.checksum;

I noticed that the duplicated assets were coming from the two users' library folders (one from each folder). I don't believe it's the case that all images were under one user's library.

Since I'm no longer using the 'old' user (id: 253b...), I'd prefer to delete the files from its path rather than the 'current' user (id: 1ee4...). Does that make sense? And is that possible?

Without getting too deep into it, there are currently two references to the owner of a file; the upcoming migrations that are failing remove libraries of the upload type. So once you run the above commands and upgrade your server, there will no longer be incorrect references.

alex-cardenas commented 4 months ago

I can run the command and start up my immich container and see if the issue is cleared. I do want to request a slight clarification:

Without getting too deep into it, there are currently two references to the owner of a file,

I wouldn't describe the issue as two owners pointing to a single file. I'd describe it as two separate files (different users, different paths, different file names) with identical checksums (i.e. the image/video contents are the same). Perhaps we are saying the same thing just in different ways? My understanding of dbs is fairly limited

zackpollard commented 4 months ago

I can run the command and start up my immich container and see if the issue is cleared. I do want to request a slight clarification:

Without getting too deep into it, there are currently two references to the owner of a file,

I wouldn't describe the issue as two owners pointing to a single file. I'd describe it as two separate files (different users, different paths, different file names) with identical checksums (i.e. the image/video contents are the same). Perhaps we are saying the same thing just in different ways? My understanding of dbs is fairly limited

Basically, this situation should never have been able to happen. When you moved the assets, you likely changed the ownerId on the asset but not the ownerId of the library. This circumvented our checks that forbid two assets with the same checksum from existing for the same user within their upload library, as that user essentially then had assets from two users' upload libraries owned by them. In the latest update we are removing upload libraries entirely, as they are unnecessary: every user should only ever have one. If you care about the file paths, I would suggest that you follow the steps above and then enable (if not already enabled) the storage template system, then run the job for all assets, which will correct the paths in the filesystem for all your assets based on your template.

alex-cardenas commented 4 months ago

Will let you know how it goes!

lhoursquentin commented 4 months ago

Thanks for the clarifications!

I'm still unclear on the potential mismatch between the ownerId and the originalPath: What I had was basically this:

               ownerId                |                                           originalPath
--------------------------------------+---------------------------------------------------------------------------------------------------
 81cbe434-48e9-43bf-9949-89b009e153a5 | upload/library/81cbe434-48e9-43bf-9949-89b009e153a5/...dup
 81cbe434-48e9-43bf-9949-89b009e153a5 | upload/library/admin/...dup            <---------- mismatch

Where the admin ownerId is not 81cbe434-48e9-43bf-9949-89b009e153a5.

So I specifically deleted the entries with the /admin path.

Assuming we delete the duplicate at random here, you may end up keeping a mismatching originalPath and ownerId?

zackpollard commented 4 months ago

Thanks for the clarifications!

I'm still unclear on the potential mismatch between the ownerId and the originalPath: What I had was basically this:

               ownerId                |                                           originalPath
--------------------------------------+---------------------------------------------------------------------------------------------------
 81cbe434-48e9-43bf-9949-89b009e153a5 | upload/library/81cbe434-48e9-43bf-9949-89b009e153a5/...dup
 81cbe434-48e9-43bf-9949-89b009e153a5 | upload/library/admin/...dup            <---------- mismatch

Where the admin ownerId is not 81cbe434-48e9-43bf-9949-89b009e153a5.

So I specifically deleted the entries with the /admin path.

Assuming we delete the duplicate at random here, you may end up keeping a mismatching originalPath and ownerId?

Yes, in theory that could happen, but it simply doesn't matter; if you care, run the storage template migration again for all assets and it will fix it.

lhoursquentin commented 4 months ago

Yes in theory that could happen, but it just simply doesn't matter, if you care, run the storage template migration again for all users and it will fix it.

Sounds good, thanks a lot :+1:

junder873 commented 4 months ago

When I ran the fix, I initially ran into the error "QueryFailedError: relation "UQ_assets_owner_checksum" already exists". I had to follow the steps here to restore from an earlier database backup to get everything working.

alex-cardenas commented 4 months ago

@zackpollard I ran the command, re-ran both test commands and they returned zero rows (assuming this means no more duplicates). Started immich container > no issues.

Currently running the Storage Template Migration job. Getting a ton of

[Nest] 196  - 06/17/2024, 11:38:12 AM   ERROR [Microservices:StorageTemplateService] Asset 8d9d8ecd-c843-4405-8e0c-e349fc1c17b6 missing exif info, skipping storage template migration

Not sure if expected or not.

HarryKodden commented 4 months ago

I managed to get rid of the duplicates and get Immich starting up again by executing this query on the postgres database:

DELETE FROM assets
WHERE id IN
    (SELECT id
    FROM 
        (SELECT id,
         ROW_NUMBER() OVER( PARTITION BY "ownerId", "checksum"
        ORDER BY  id ) AS row_num
        FROM assets ) t
        WHERE t.row_num > 1 );

PS. Make a backup of your database in advance !!!

Moderator Note: This is a community suggested query, not one suggested by the Immich team.

zackpollard commented 4 months ago

Currently running the Storage Template Migration job. Getting a ton of

[Nest] 196  - 06/17/2024, 11:38:12 AM   ERROR [Microservices:StorageTemplateService] Asset 8d9d8ecd-c843-4405-8e0c-e349fc1c17b6 missing exif info, skipping storage template migration

Not sure if expected or not.

You may need to re-run exif extraction, but some assets just don't have exif, and so we don't have the information needed to move them to the template location.

jrasm91 commented 2 months ago

I believe this issue is resolved. If any user still faces this issue, they can follow the steps outlined above.