immich-app / immich

High performance self-hosted photo and video management solution.
https://immich.app
GNU Affero General Public License v3.0

after upgrade to 1.107.0 face detection doesn't work #10771

Closed. rui-nar closed this issue 3 months ago.

rui-nar commented 3 months ago

The bug

After upgrading to 1.107.0, I'm getting the error shown in the log output below when launching a face detection job.

The OS that Immich Server is running on

Docker

Version of Immich Server

v1.107.0

Version of Immich Mobile App

Not using

Platform with the issue

Your docker-compose.yml content

version: "3.8"

#
# WARNING: Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.
#

name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
#    command: [ "start.sh", "immich" ]
    volumes:
      - ${LIBRARY_LOCATION}:/usr/src/app/upload/library
      - ${UPLOAD_LOCATION}:/usr/src/app/upload/upload
      - ${THUMBS_LOCATION}:/usr/src/app/upload/thumbs
      - ${PROFILE_LOCATION}:/usr/src/app/upload/profile
      - ${VIDEO_LOCATION}:/usr/src/app/upload/encoded-video
      - /etc/localtime:/etc/localtime:ro
      - /volumeUSB2/usbshare:/volumeUSB2/usbshare:ro
    env_file:
      - .env
    ports:
      - 2283:3001
    extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/hardware-transcoding
      file: hwaccel.transcoding.yml 
      service: quicksync # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
#    privileged: true
    depends_on:
      - redis
      - database
    restart: always

  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino
#    image: ghcr.io/immich-app/immich-machine-learning:pr-10740-openvino
    privileged: true
    extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
      file: hwaccel.ml.yml
      service: openvino # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - /volume1/docker/immich/cache:/cache
      - ${LIBRARY_LOCATION}:/usr/src/app/upload/library
      - ${UPLOAD_LOCATION}:/usr/src/app/upload/upload
      - ${THUMBS_LOCATION}:/usr/src/app/upload/thumbs
      - ${PROFILE_LOCATION}:/usr/src/app/upload/profile
      - ${VIDEO_LOCATION}:/usr/src/app/upload/encoded-video
    env_file:
      - .env
    restart: always

#  immich-web:
#    container_name: immich_web
#    image: ghcr.io/immich-app/immich-web:${IMMICH_VERSION:-release}
#    env_file:
#      - .env
#    ports:
#      - 3000:3000
#    restart: always

  redis:
    container_name: immich_redis
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
    image: redis:7.2-alpine
    restart: always

  database:
    container_name: immich_postgres
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
    image: tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
    env_file:
      - .env
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
    volumes:
      - /volume1/docker/immich/db:/var/lib/postgresql/data
    restart: always

volumes:
  pgdata:
  model-cache:

Your .env content

# You can find documentation for all the supported env variables at https://immich.app/docs/install/environment-variables
LOG_LEVEL=debug

# The location where your uploaded files are stored
#UPLOAD_LOCATION=/volumeUSB2/usbshare/ImmichUpload/

LIBRARY_LOCATION=/volumeUSB2/usbshare/ImmichLibrary/
THUMBS_LOCATION=/volume1/immich/Immich-Cache/thumbs/
UPLOAD_LOCATION=/volume1/immich/Immich-Cache/upload/
PROFILE_LOCATION=/volume1/immich/Immich-Cache/profile/
VIDEO_LOCATION=/volume1/immich/Immich-Cache/encoded-video/

# The Immich version to use. You can pin this to a specific version like "v1.71.0"
IMMICH_VERSION=release

# Connection secret for postgres. You should change it to a random password
DB_PASSWORD=postgres

# The values below this line do not need to be changed
###################################################################################
DB_HOSTNAME=immich_postgres
DB_USERNAME=postgres
DB_PASSWORD=postgresdb
DB_DATABASE_NAME=immich

REDIS_HOSTNAME=immich_redis

Reproduction steps

1. Go to Jobs
2. Click on "Missing" under Face Detection

Relevant log output

[07/02/24 17:57:56] INFO     Starting gunicorn 22.0.0                           
[07/02/24 17:57:56] INFO     Listening at: http://[::]:3003 (9)                 
[07/02/24 17:57:56] INFO     Using worker: app.config.CustomUvicornWorker       
[07/02/24 17:57:56] INFO     Booting worker with pid: 10                        
[07/02/24 17:58:07] INFO     Started server process [10]                        
[07/02/24 17:58:07] INFO     Waiting for application startup.                   
[07/02/24 17:58:07] INFO     Created in-memory cache with unloading after 300s  
                             of inactivity.                                     
[07/02/24 17:58:07] INFO     Initialized request thread pool with 4 threads.    
[07/02/24 17:58:07] INFO     Application startup complete.                      
[07/02/24 17:58:59] INFO     Attempt #2 to load detection model 'buffalo_l' to  
                             memory                                             
[07/02/24 17:59:01] INFO     Setting execution providers to                     
                             ['OpenVINOExecutionProvider',                      
                             'CPUExecutionProvider'], in descending order of    
                             preference                                         
2024-07-02 17:59:02.526318357 [E:onnxruntime:, inference_session.cc:1985 Initialize] Encountered unknown exception in Initialize()
[07/02/24 17:59:02] ERROR    Exception in ASGI application                      

                             ╭─────── Traceback (most recent call last) ───────╮
                             │ /usr/src/app/main.py:151 in predict             │
                             │                                                 │
                             │   148 │   │   inputs = text                     │
                             │   149 │   else:                                 │
                             │   150 │   │   raise HTTPException(400, "Either  │
                             │ ❱ 151 │   response = await run_inference(inputs │
                             │   152 │   return ORJSONResponse(response)       │
                             │   153                                           │
                             │   154                                           │
                             │                                                 │
                             │ /usr/src/app/main.py:174 in run_inference       │
                             │                                                 │
                             │   171 │   │   response[entry["task"]] = output  │
                             │   172 │                                         │
                             │   173 │   without_deps, with_deps = entries     │
                             │ ❱ 174 │   await asyncio.gather(*[_run_inference │
                             │   175 │   if with_deps:                         │
                             │   176 │   │   await asyncio.gather(*[_run_infer │
                             │   177 │   if isinstance(payload, Image):        │
                             │                                                 │
                             │ /usr/src/app/main.py:168 in _run_inference      │
                             │                                                 │
                             │   165 │   │   │   except KeyError:              │
                             │   166 │   │   │   │   message = f"Task {entry[' │
                             │       output of {dep}"                          │
                             │   167 │   │   │   │   raise HTTPException(400,  │
                             │ ❱ 168 │   │   model = await load(model)         │
                             │   169 │   │   output = await run(model.predict, │
                             │   170 │   │   outputs[model.identity] = output  │
                             │   171 │   │   response[entry["task"]] = output  │
                             │                                                 │
                             │ /usr/src/app/main.py:202 in load                │
                             │                                                 │
                             │   199 │   │   return model                      │
                             │   200 │                                         │
                             │   201 │   try:                                  │
                             │ ❱ 202 │   │   return await run(_load, model)    │
                             │   203 │   except (OSError, InvalidProtobuf, Bad │
                             │   204 │   │   log.warning(f"Failed to load {mod │
                             │       '{model.model_name}'. Clearing cache.")   │
                             │   205 │   │   model.clear_cache()               │
                             │                                                 │
                             │ /usr/src/app/main.py:187 in run                 │
                             │                                                 │
                             │   184 │   if thread_pool is None:               │
                             │   185 │   │   return func(*args, **kwargs)      │
                             │   186 │   partial_func = partial(func, *args, * │
                             │ ❱ 187 │   return await asyncio.get_running_loop │
                             │   188                                           │
                             │   189                                           │
                             │   190 async def load(model: InferenceModel) ->  │
                             │                                                 │
                             │ /usr/lib/python3.10/concurrent/futures/thread.p │
                             │ y:58 in run                                     │
                             │                                                 │
                             │ /usr/src/app/main.py:198 in _load               │
                             │                                                 │
                             │   195 │   │   if model.load_attempts > 1:       │
                             │   196 │   │   │   raise HTTPException(500, f"Fa │
                             │   197 │   │   with lock:                        │
                             │ ❱ 198 │   │   │   model.load()                  │
                             │   199 │   │   return model                      │
                             │   200 │                                         │
                             │   201 │   try:                                  │
                             │                                                 │
                             │ /usr/src/app/models/base.py:53 in load          │
                             │                                                 │
                             │    50 │   │   self.download()                   │
                             │    51 │   │   attempt = f"Attempt #{self.load_a │
                             │       else "Loading"                            │
                             │    52 │   │   log.info(f"{attempt} {self.model_ │
                             │       '{self.model_name}' to memory")           │
                             │ ❱  53 │   │   self.session = self._load()       │
                             │    54 │   │   self.loaded = True                │
                             │    55 │                                         │
                             │    56 │   def predict(self, *inputs: Any, **mod │
                             │                                                 │
                             │ /usr/src/app/models/facial_recognition/detectio │
                             │ n.py:28 in _load                                │
                             │                                                 │
                             │   25 │   │   super().__init__(model_name, cache │
                             │   26 │                                          │
                             │   27 │   def _load(self) -> ModelSession:       │
                             │ ❱ 28 │   │   session = self._make_session(self. │
                             │   29 │   │   self.model = RetinaFace(session=se │
                             │   30 │   │   self.model.prepare(ctx_id=0, det_t │
                             │   31                                            │
                             │                                                 │
                             │ /usr/src/app/models/base.py:108 in              │
                             │ _make_session                                   │
                             │                                                 │
                             │   105 │   │   │   case ".armnn":                │
                             │   106 │   │   │   │   session: ModelSession = A │
                             │   107 │   │   │   case ".onnx":                 │
                             │ ❱ 108 │   │   │   │   session = OrtSession(mode │
                             │   109 │   │   │   case _:                       │
                             │   110 │   │   │   │   raise ValueError(f"Unsupp │
                             │   111 │   │   return session                    │
                             │                                                 │
                             │ /usr/src/app/sessions/ort.py:28 in __init__     │
                             │                                                 │
                             │    25 │   │   self.providers = providers if pro │
                             │    26 │   │   self.provider_options = provider_ │
                             │       self._provider_options_default            │
                             │    27 │   │   self.sess_options = sess_options  │
                             │       self._sess_options_default                │
                             │ ❱  28 │   │   self.session = ort.InferenceSessi │
                             │    29 │   │   │   self.model_path.as_posix(),   │
                             │    30 │   │   │   providers=self.providers,     │
                             │    31 │   │   │   provider_options=self.provide │
                             │                                                 │
                             │ /opt/venv/lib/python3.10/site-packages/onnxrunt │
                             │ ime/capi/onnxruntime_inference_collection.py:41 │
                             │ 9 in __init__                                   │
                             │                                                 │
                             │    416 │   │   disabled_optimizers = kwargs["di │
                             │        kwargs else None                         │
                             │    417 │   │                                    │
                             │    418 │   │   try:                             │
                             │ ❱  419 │   │   │   self._create_inference_sessi │
                             │        disabled_optimizers)                     │
                             │    420 │   │   except (ValueError, RuntimeError │
                             │    421 │   │   │   if self._enable_fallback:    │
                             │    422 │   │   │   │   try:                     │
                             │                                                 │
                             │ /opt/venv/lib/python3.10/site-packages/onnxrunt │
                             │ ime/capi/onnxruntime_inference_collection.py:48 │
                             │ 3 in _create_inference_session                  │
                             │                                                 │
                             │    480 │   │   │   disabled_optimizers = set(di │
                             │    481 │   │                                    │
                             │    482 │   │   # initialize the C++ InferenceSe │
                             │ ❱  483 │   │   sess.initialize_session(provider │
                             │    484 │   │                                    │
                             │    485 │   │   self._sess = sess                │
                             │    486 │   │   self._sess_options = self._sess. │
                             ╰─────────────────────────────────────────────────╯
                             RuntimeException: [ONNXRuntimeError] : 6 :         
                             RUNTIME_EXCEPTION : Encountered unknown exception  
                             in Initialize()                                    
[07/02/24 17:59:03] INFO     Attempt #3 to load detection model 'buffalo_l' to  
                             memory                                             
[07/02/24 17:59:03] INFO     Setting execution providers to                     
                             ['OpenVINOExecutionProvider',                      
                             'CPUExecutionProvider'], in descending order of    
                             preference                                         
2024-07-02 17:59:04.028049543 [E:onnxruntime:, inference_session.cc:1985 Initialize] Encountered unknown exception in Initialize()
[07/02/24 17:59:04] ERROR    Exception in ASGI application                      

                             ╭─────── Traceback (most recent call last) ───────╮
                             │ /usr/src/app/main.py:151 in predict             │
                             │                                                 │
                             │   148 │   │   inputs = text                     │
                             │   149 │   else:                                 │
                             │   150 │   │   raise HTTPException(400, "Either  │
                             │ ❱ 151 │   response = await run_inference(inputs │
                             │   152 │   return ORJSONResponse(response)       │
                             │   153                                           │
                             │   154                                           │
                             │                                                 │
                             │ /usr/src/app/main.py:174 in run_inference       │
                             │                                                 │
                             │   171 │   │   response[entry["task"]] = output  │
                             │   172 │                                         │
                             │   173 │   without_deps, with_deps = entries     │
                             │ ❱ 174 │   await asyncio.gather(*[_run_inference │
                             │   175 │   if with_deps:                         │
                             │   176 │   │   await asyncio.gather(*[_run_infer │
                             │   177 │   if isinstance(payload, Image):        │
                             │                                                 │
                             │ /usr/src/app/main.py:168 in _run_inference      │
                             │                                                 │
                             │   165 │   │   │   except KeyError:              │
                             │   166 │   │   │   │   message = f"Task {entry[' │
                             │       output of {dep}"                          │
                             │   167 │   │   │   │   raise HTTPException(400,  │
                             │ ❱ 168 │   │   model = await load(model)         │
                             │   169 │   │   output = await run(model.predict, │
                             │   170 │   │   outputs[model.identity] = output  │
                             │   171 │   │   response[entry["task"]] = output  │
                             │                                                 │
                             │ /usr/src/app/main.py:202 in load                │
                             │                                                 │
                             │   199 │   │   return model                      │
                             │   200 │                                         │
                             │   201 │   try:                                  │
                             │ ❱ 202 │   │   return await run(_load, model)    │
                             │   203 │   except (OSError, InvalidProtobuf, Bad │
                             │   204 │   │   log.warning(f"Failed to load {mod │
                             │       '{model.model_name}'. Clearing cache.")   │
                             │   205 │   │   model.clear_cache()               │
                             │                                                 │
                             │ /usr/src/app/main.py:187 in run                 │
                             │                                                 │
                             │   184 │   if thread_pool is None:               │
                             │   185 │   │   return func(*args, **kwargs)      │
                             │   186 │   partial_func = partial(func, *args, * │
                             │ ❱ 187 │   return await asyncio.get_running_loop │
                             │   188                                           │
                             │   189                                           │
                             │   190 async def load(model: InferenceModel) ->  │
                             │                                                 │
                             │ /usr/lib/python3.10/concurrent/futures/thread.p │
                             │ y:58 in run                                     │
                             │                                                 │
                             │ /usr/src/app/main.py:198 in _load               │
                             │                                                 │
                             │   195 │   │   if model.load_attempts > 1:       │
                             │   196 │   │   │   raise HTTPException(500, f"Fa │
                             │   197 │   │   with lock:                        │
                             │ ❱ 198 │   │   │   model.load()                  │
                             │   199 │   │   return model                      │
                             │   200 │                                         │
                             │   201 │   try:                                  │
                             │                                                 │
                             │ /usr/src/app/models/base.py:53 in load          │
                             │                                                 │
                             │    50 │   │   self.download()                   │
                             │    51 │   │   attempt = f"Attempt #{self.load_a │
                             │       else "Loading"                            │
                             │    52 │   │   log.info(f"{attempt} {self.model_ │
                             │       '{self.model_name}' to memory")           │
                             │ ❱  53 │   │   self.session = self._load()       │
                             │    54 │   │   self.loaded = True                │
                             │    55 │                                         │
                             │    56 │   def predict(self, *inputs: Any, **mod │
                             │                                                 │
                             │ /usr/src/app/models/facial_recognition/detectio │
                             │ n.py:28 in _load                                │
                             │                                                 │
                             │   25 │   │   super().__init__(model_name, cache │
                             │   26 │                                          │
                             │   27 │   def _load(self) -> ModelSession:       │
                             │ ❱ 28 │   │   session = self._make_session(self. │
                             │   29 │   │   self.model = RetinaFace(session=se │
                             │   30 │   │   self.model.prepare(ctx_id=0, det_t │
                             │   31                                            │
                             │                                                 │
                             │ /usr/src/app/models/base.py:108 in              │
                             │ _make_session                                   │
                             │                                                 │
                             │   105 │   │   │   case ".armnn":                │
                             │   106 │   │   │   │   session: ModelSession = A │
                             │   107 │   │   │   case ".onnx":                 │
                             │ ❱ 108 │   │   │   │   session = OrtSession(mode │
                             │   109 │   │   │   case _:                       │
                             │   110 │   │   │   │   raise ValueError(f"Unsupp │
                             │   111 │   │   return session                    │
                             │                                                 │
                             │ /usr/src/app/sessions/ort.py:28 in __init__     │
                             │                                                 │
                             │    25 │   │   self.providers = providers if pro │
                             │    26 │   │   self.provider_options = provider_ │
                             │       self._provider_options_default            │
                             │    27 │   │   self.sess_options = sess_options  │
                             │       self._sess_options_default                │
                             │ ❱  28 │   │   self.session = ort.InferenceSessi │
                             │    29 │   │   │   self.model_path.as_posix(),   │
                             │    30 │   │   │   providers=self.providers,     │
                             │    31 │   │   │   provider_options=self.provide │
                             │                                                 │
                             │ /opt/venv/lib/python3.10/site-packages/onnxrunt │
                             │ ime/capi/onnxruntime_inference_collection.py:41 │
                             │ 9 in __init__                                   │
                             │                                                 │
                             │    416 │   │   disabled_optimizers = kwargs["di │
                             │        kwargs else None                         │
                             │    417 │   │                                    │
                             │    418 │   │   try:                             │
                             │ ❱  419 │   │   │   self._create_inference_sessi │
                             │        disabled_optimizers)                     │
                             │    420 │   │   except (ValueError, RuntimeError │
                             │    421 │   │   │   if self._enable_fallback:    │
                             │    422 │   │   │   │   try:                     │
                             │                                                 │
                             │ /opt/venv/lib/python3.10/site-packages/onnxrunt │
                             │ ime/capi/onnxruntime_inference_collection.py:48 │
                             │ 3 in _create_inference_session                  │
                             │                                                 │
                             │    480 │   │   │   disabled_optimizers = set(di │
                             │    481 │   │                                    │
                             │    482 │   │   # initialize the C++ InferenceSe │
                             │ ❱  483 │   │   sess.initialize_session(provider │
                             │    484 │   │                                    │
                             │    485 │   │   self._sess = sess                │
                             │    486 │   │   self._sess_options = self._sess. │
                             ╰─────────────────────────────────────────────────╯
                             RuntimeException: [ONNXRuntimeError] : 6 :         
                             RUNTIME_EXCEPTION : Encountered unknown exception  
                             in Initialize()

Additional information

No response

alextran1502 commented 3 months ago

Can you try docker compose down and docker compose up, then try again?
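
For reference, assuming the stack is managed from the directory containing docker-compose.yml:

docker compose down
docker compose up -d   # -d starts the stack in the background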

rui-nar commented 3 months ago

I have done that multiple times. I even tried pulling an earlier image of the machine-learning container (which ended with a GPU limit exceeded error). I've reproduced this issue every time I came back to a full 1.107.0 version of all containers.

alextran1502 commented 3 months ago

I assume it is related to OpenVINO. Can you try commenting out this block in your docker-compose.yml file and then bringing the containers down/up again? After that, try to rerun the job:

[screenshot: the extends: hardware-acceleration block of the immich-machine-learning service in docker-compose.yml]
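
Judging from the reply below, the block in question is presumably the extends: section of the immich-machine-learning service; commented out, it would look like this:

#    extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
#      file: hwaccel.ml.yml
#      service: openvino # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable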

rui-nar commented 3 months ago

I commented it out, but I'm seeing the same error:

    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    privileged: true
#    extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
#      file: hwaccel.ml.yml
#      service: openvino # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - /volume1/docker/immich/cache:/cache
      - ${LIBRARY_LOCATION}:/usr/src/app/upload/library

I did a docker compose down and then up.

Error log:

[07/02/24 19:50:27] INFO     Starting gunicorn 22.0.0                           
[07/02/24 19:50:27] INFO     Listening at: http://[::]:3003 (9)                 
[07/02/24 19:50:27] INFO     Using worker: app.config.CustomUvicornWorker       
[07/02/24 19:50:27] INFO     Booting worker with pid: 10                        
[07/02/24 19:50:38] INFO     Started server process [10]                        
[07/02/24 19:50:38] INFO     Waiting for application startup.                   
[07/02/24 19:50:38] INFO     Created in-memory cache with unloading after 300s  
                             of inactivity.                                     
[07/02/24 19:50:38] INFO     Initialized request thread pool with 4 threads.    
[07/02/24 19:50:38] INFO     Application startup complete.                      
[07/02/24 19:50:51] INFO     Attempt #2 to load detection model 'buffalo_l' to  
                             memory                                             
[07/02/24 19:50:51] INFO     Setting execution providers to                     
                             ['CPUExecutionProvider'], in descending order of   
                             preference                                         
[07/02/24 19:50:53] INFO     Attempt #2 to load recognition model 'buffalo_l' to
                             memory                                             
[07/02/24 19:50:53] INFO     Setting execution providers to                     
                             ['CPUExecutionProvider'], in descending order of   
                             preference                                         
2024-07-02 19:51:06.264285242 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {1,512} does not match actual shape of {2,512} for output 683
2024-07-02 19:51:26.939772166 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {1,512} does not match actual shape of {4,512} for output 683
2024-07-02 19:51:41.587258111 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {1,512} does not match actual shape of {15,512} for output 683
2024-07-02 19:51:43.470866279 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {1,512} does not match actual shape of {4,512} for output 683
2024-07-02 19:51:51.199545630 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {1,512} does not match actual shape of {2,512} for output 683
...

rui-nar commented 3 months ago

So it would seem that the problem is with OpenVINO. Is there anything I can do about it?

mertalev commented 3 months ago

The OpenVINO issue is being tracked in #8226.

vivibro commented 3 months ago

"So it would seem that the problem is with OpenVINO. Is there anything I can do about it?" Have you solved that problem? I have the same issue: "Expected shape from model of {1,512} does not match actual shape of {xxxx} for output 683".

mertalev commented 3 months ago

"Expected shape from model of {1,512} does not match actual shape of {xxxx} for output 683"

I'll correct this, but it's harmless and you can ignore it for now.
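
For context, this warning generally means the model's ONNX graph declares a fixed batch size of 1 on an output while inference actually runs with larger batches. As a hypothetical sketch only (not necessarily the actual fix, and the recognition model filename is an assumption), the fixed dimension could be relaxed to a dynamic one like this:

# Hypothetical sketch: mark a fixed {1,512} output as having a dynamic batch
# dimension so ONNX Runtime stops warning when the runtime batch size differs.
import onnx

model = onnx.load("w600k_r50.onnx")  # recognition model filename is an assumption
for output in model.graph.output:
    dims = output.type.tensor_type.shape.dim
    if dims and dims[0].dim_value == 1:
        dims[0].dim_param = "batch"  # replace the fixed size 1 with a symbolic dim
onnx.save(model, "w600k_r50_dynamic.onnx")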