Closed: skibbipl closed this 6 months ago
Hey there @home-assistant/core, mind taking a look at this issue as it has been labeled with an integration (recorder) you are listed as a code owner for? Thanks!
(message by CodeOwnersMention)
recorder documentation · recorder source (message by IssueLinks)
I am also using PostgreSQL and got the same error.
The issue is still observable in 2024.4.2
The problem
A few weeks ago I migrated my HA instance to PostgreSQL. Today I realized that my recorder is broken :(
What version of Home Assistant Core has the issue?
core-2024.4.1
What was the last working version of Home Assistant Core?
I migrated on version 2024.3
What type of installation are you running?
Home Assistant Container
Integration causing the issue
recorder
Link to integration documentation on our website
https://www.home-assistant.io/integrations/recorder
Diagnostics information
Logger: homeassistant.components.recorder.core
Source: components/recorder/core.py:981
integration: Recorder (documentation, issues)
First occurred: 20:21:07 (1 occurrences)
Last logged: 20:21:07
Database error during schema migration
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1971, in _exec_single_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 919, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "schema_changes_pkey"
DETAIL:  Key (change_id)=(5) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/recorder/core.py", line 981, in _migrate_schema_and_setup_run
    migration.migrate_schema(
  File "/usr/src/homeassistant/homeassistant/components/recorder/migration.py", line 275, in migrate_schema
    with session_scope(session=session_maker()) as session:
  File "/usr/local/lib/python3.12/contextlib.py", line 144, in __exit__
    next(self.gen)
  File "/usr/src/homeassistant/homeassistant/components/recorder/util.py", line 144, in session_scope
    session.commit()
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 1972, in commit
    trans.commit(_to_root=True)
  File "<string>", line 2, in commit
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/state_changes.py", line 139, in _go
    ret_value = fn(self, *arg, **kw)
                ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 1257, in commit
    self._prepare_impl()
  File "<string>", line 2, in _prepare_impl
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/state_changes.py", line 139, in _go
    ret_value = fn(self, *arg, **kw)
                ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 1232, in _prepare_impl
    self.session.flush()
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4296, in flush
    self._flush(objects)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4431, in _flush
    with util.safe_reraise():
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4392, in _flush
    flush_context.execute()
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/unitofwork.py", line 466, in execute
    rec.execute(self)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/unitofwork.py", line 642, in execute
    util.preloaded.orm_persistence.save_obj(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/persistence.py", line 93, in save_obj
    _emit_insert_statements(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/persistence.py", line 1233, in _emit_insert_statements
    result = connection.execute(
             ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1422, in execute
    return meth(
           ^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 514, in _execute_on_connection
    return connection._execute_clauseelement(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1644, in _execute_clauseelement
    ret = self._execute_context(
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1850, in _execute_context
    return self._exec_single_context(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1990, in _exec_single_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2357, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1971, in _exec_single_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 919, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "schema_changes_pkey"
DETAIL:  Key (change_id)=(5) already exists.
[SQL: INSERT INTO schema_changes (schema_version, changed) VALUES (%(schema_version)s, %(changed)s) RETURNING schema_changes.change_id]
[parameters: {'schema_version': 43, 'changed': datetime.datetime(2024, 4, 7, 18, 21, 7, 577379, tzinfo=datetime.timezone.utc)}]
(Background on this error at: https://sqlalche.me/e/20/gkpj)
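The core failure in the traceback is an INSERT whose primary key collides with a row that already exists. A minimal, self-contained illustration of the same kind of constraint violation, using SQLite's in-memory database as a stand-in for PostgreSQL (this is not Home Assistant's actual code):

```python
import sqlite3

# Hypothetical miniature of the situation: rows were copied into the table
# with explicit primary keys during a migration, but the key generator was
# left behind, so the "next" key it hands out already exists.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE schema_changes (change_id INTEGER PRIMARY KEY, schema_version INTEGER)"
)

# Simulate the migrated data: change_ids 1..5 inserted explicitly.
conn.executemany(
    "INSERT INTO schema_changes (change_id, schema_version) VALUES (?, ?)",
    [(1, 11), (2, 12), (3, 13), (4, 14), (5, 15)],
)

# A stale generator hands out id 5 again; the unique constraint rejects it,
# just as PostgreSQL raised UniqueViolation on change_id=5 above.
try:
    conn.execute(
        "INSERT INTO schema_changes (change_id, schema_version) VALUES (5, 43)"
    )
except sqlite3.IntegrityError as err:
    print("IntegrityError:", err)  # UNIQUE constraint failed: schema_changes.change_id
```

The mechanics differ (PostgreSQL uses a sequence object, SQLite an internal rowid counter), but the symptom is identical: a duplicate-key error on a column the application expects to be auto-generated.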
Example YAML snippet
No response
Anything in the logs that might be useful for us?
2024-04-11 10:03:11.915 UTC [244317] haas@haas ERROR:  column "last_reported_ts" of relation "states" already exists
2024-04-11 10:03:11.915 UTC [244317] haas@haas STATEMENT:  ALTER TABLE states ADD last_reported_ts DOUBLE PRECISION
2024-04-11 10:03:11.924 UTC [244317] haas@haas ERROR:  column "last_reported_ts" of relation "states" already exists
2024-04-11 10:03:11.924 UTC [244317] haas@haas STATEMENT:  ALTER TABLE states ADD last_reported_ts DOUBLE PRECISION
2024-04-11 10:03:11.929 UTC [244317] haas@haas ERROR:  duplicate key value violates unique constraint "schema_changes_pkey"
2024-04-11 10:03:11.929 UTC [244317] haas@haas DETAIL:  Key (change_id)=(9) already exists.
2024-04-11 10:03:11.929 UTC [244317] haas@haas STATEMENT:  INSERT INTO schema_changes (schema_version, changed) VALUES (43, '2024-04-11T10:03:11.842846+00:00'::timestamptz) RETURNING schema_changes.change_id
Additional information
haas=# select * from schema_changes;
 change_id | schema_version |            changed
-----------+----------------+-------------------------------
         1 |             11 | 2021-02-03 09:39:37.28938+00
         2 |             12 | 2021-04-15 20:09:58.214423+00
         3 |             13 | 2021-04-15 20:09:58.218103+00
         4 |             14 | 2021-05-07 20:25:09.224286+00
         5 |             15 | 2021-06-03 09:58:09.746121+00
         6 |             16 | 2021-06-03 09:58:09.751906+00
         7 |             17 | 2021-07-08 16:22:48.104187+00
         8 |             18 | 2021-07-08 16:22:48.118069+00
         9 |             19 | 2021-08-04 12:32:42.440532+00
        10 |             20 | 2021-08-04 12:32:42.452672+00
        11 |             21 | 2021-10-08 15:05:26.164397+00
        12 |             22 | 2021-10-08 15:05:26.256125+00
        13 |             23 | 2021-11-18 17:18:12.707857+00
        14 |             24 | 2022-02-03 21:01:47.5236+00
        15 |             25 | 2022-04-07 06:32:48.940744+00
        16 |             26 | 2022-05-15 20:37:02.349003+00
        17 |             27 | 2022-06-12 15:01:55.407054+00
        18 |             28 | 2022-06-12 15:02:04.937636+00
        19 |             29 | 2022-06-12 15:02:04.972143+00
        20 |             30 | 2022-10-17 06:05:32.327172+00
        21 |             31 | 2023-02-07 06:24:22.223205+00
        22 |             32 | 2023-02-07 06:24:46.740814+00
        23 |             33 | 2023-02-07 06:24:46.744759+00
        24 |             34 | 2023-03-02 22:01:38.039114+00
        25 |             35 | 2023-03-02 22:01:41.244728+00
        26 |             36 | 2023-04-06 06:16:57.338555+00
        27 |             37 | 2023-04-06 06:16:57.355443+00
        28 |             38 | 2023-04-06 06:16:58.28749+00
        29 |             39 | 2023-04-06 06:16:58.309263+00
        30 |             40 | 2023-04-06 06:16:58.422542+00
        31 |             41 | 2023-04-06 06:16:58.427215+00
        32 |             42 | 2023-11-02 07:01:40.833805+00
How big is your database?
1067 MB, but I don't think it matters...
I had a states table of about 8 GB, so I set the purge keep-days to 5, updated again, and got the same error.
I found a solution. The issue was on my side, probably caused by the migration from SQLite to PostgreSQL: the root cause was that the sequence behind schema_changes.change_id was out of sync with the table. The solution that worked for me was:
stop homeassistant
-- just to be sure, drop the column that HA created before hitting the error
alter table states drop column last_reported_ts;
-- find the highest change_id in the schema_changes table
select max(change_id) from schema_changes;
-- optional validation: check the current sequence value (mine was 5)
select last_value from schema_changes_change_id_seq;
-- restart the sequence with max(change_id) + 1 (my max was 32, so 33)
alter sequence schema_changes_change_id_seq restart with 33;
start homeassistant
After this procedure I no longer see any errors in the HA log:
2024-04-13 08:30:30.488 WARNING (Recorder) [homeassistant.components.recorder.migration] Database is about to upgrade from schema version: 42 to: 43
2024-04-13 08:30:30.489 WARNING (Recorder) [homeassistant.components.recorder.migration] Adding columns last_reported_ts to table states. Note: this can take several minutes on large databases and slow computers. Please be patient!
2024-04-13 08:30:30.503 WARNING (Recorder) [homeassistant.components.recorder.migration] Upgrade to version 43 done
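The fix boils down to one invariant: the sequence's next value must exceed max(change_id). A tiny Python model of that repair (the Sequence class here is a hypothetical stand-in for PostgreSQL's schema_changes_change_id_seq, simplified so that nextval() returns the stored value directly; it is not a real database object):

```python
class Sequence:
    """Simplified stand-in for a PostgreSQL sequence (illustration only)."""

    def __init__(self, start=1):
        self._next = start

    def nextval(self):
        # Like nextval('..._seq'): return the current value and advance.
        value = self._next
        self._next += 1
        return value

    def restart_with(self, value):
        # Models: ALTER SEQUENCE ... RESTART WITH <value>
        self._next = value


# Rows copied from SQLite with explicit ids 1..32; the sequence never advanced.
existing_ids = set(range(1, 33))
seq = Sequence(start=5)  # stale, as in the report above

# The stale sequence hands out an id that already exists -> duplicate key.
assert seq.nextval() in existing_ids

# The repair: restart the sequence at max(change_id) + 1.
seq.restart_with(max(existing_ids) + 1)
next_id = seq.nextval()
print(next_id)  # 33 -- no longer collides
assert next_id not in existing_ids
```

The same invariant is why the procedure says "value from max(change_id) plus 1": any smaller restart value leaves at least one future nextval() colliding with an existing row.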
It works