Closed: Foxeronie closed this issue 3 months ago.
Hi @Foxeronie,
Thanks for the report. I will have a look.
Best regards, Eric
Hi @Foxeronie,
Can you please pull the image and try again. The issue should be fixed.
Best regards, Eric
Hi Eric,
this problem seems to be solved. Thank you! :) But now I'm running into the next error with this pod.
I0412 19:32:26.050838 1 database.go:285] "Connecting to database" logger="database"
2024-04-12T21:32:26.417133870+02:00 I0412 19:32:26.417013 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="dial tcp: lookup icinga-stack-kubernetes-database: operation was canceled"
2024-04-12T21:32:26.417151387+02:00 I0412 19:32:26.417046 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="context canceled"
2024-04-12T21:32:26.417376280+02:00 I0412 19:32:26.417254 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="context canceled"
I0412 19:32:26.417356 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="dial tcp: lookup icinga-stack-kubernetes-database: operation was canceled"
2024-04-12T21:32:26.417409427+02:00 I0412 19:32:26.417372 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="dial tcp: lookup icinga-stack-kubernetes-database: operation was canceled"
2024-04-12T21:32:26.417406338+02:00 [invalid connection]
2024-04-12T21:32:26.417483281+02:00 I0412 19:32:26.417387 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="dial tcp: lookup icinga-stack-kubernetes-database: operation was canceled"
2024-04-12T21:32:26.417496599+02:00 I0412 19:32:26.417393 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="dial tcp: lookup icinga-stack-kubernetes-database: operation was canceled"
2024-04-12T21:32:26.417504349+02:00 I0412 19:32:26.417411 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="dial tcp: lookup icinga-stack-kubernetes-database: operation was canceled"
2024-04-12T21:32:26.417524360+02:00 I0412 19:32:26.417407 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="dial tcp: lookup icinga-stack-kubernetes-database: operation was canceled"
2024-04-12T21:32:26.417533293+02:00 I0412 19:32:26.417390 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="dial tcp: lookup icinga-stack-kubernetes-database: operation was canceled"
I0412 19:32:26.417633 1 driver.go:43] "Can't connect to database. Retrying" logger="database" error="context canceled"
2024-04-12T21:32:26.417790525+02:00 F0412 19:32:26.417719 1 main.go:204] can't retry: can't perform "INSERT INTO `persistent_volume_claim_ref` (`kind`, `name`, `uid`, `persistent_volume_id`) VALUES (:kind, :name, :uid, :persistent_volume_id) ON DUPLICATE KEY UPDATE `kind` = VALUES(`kind`), `name` = VALUES(`name`), `uid` = VALUES(`uid`), `persistent_volume_id` = VALUES(`persistent_volume_id`)": Error 1406 (22001): Data too long for column 'name' at row 7
failed to create fsnotify watcher: too many open files
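As an aside, the `failed to create fsnotify watcher: too many open files` line is unrelated to the database: it usually means the node's per-user inotify limits are exhausted. A quick way to inspect them on a Linux node (the raised values in the comments are common examples, not taken from this cluster):

```shell
# Per-user inotify limits; fsnotify fails to create watchers once these run out.
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches

# Raising them requires root, e.g. (example values, tune to your node):
#   sysctl -w fs.inotify.max_user_instances=512
#   sysctl -w fs.inotify.max_user_watches=524288
```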
Should I create a new issue for this?
Best regards, Patrick
A new issue is not necessary yet. Please run the following statement in the database:
ALTER TABLE persistent_volume_claim_ref MODIFY COLUMN name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
After that, if the daemon no longer crashes, please run the following query:
SELECT * FROM persistent_volume_claim_ref WHERE LENGTH(name) > 63;
And if it is OK data-protection-wise, please share the result. Maybe the result of the `kind` column alone is enough.
I'm now getting the following error.
2024-04-15T09:57:52.542959591+02:00 F0415 07:57:52.542760 1 main.go:204] can't retry: can't perform "INSERT INTO `node_volume` (`mounted`, `node_id`, `device_path`) VALUES (:mounted, :node_id, :device_path) ON DUPLICATE KEY UPDATE `mounted` = VALUES(`mounted`), `node_id` = VALUES(`node_id`), `device_path` = VALUES(`device_path`)": Error 1364 (HY000): Field 'name' doesn't have a default value
Edit: Sorry, I forgot the output.
MariaDB [kubernetes]> SELECT * FROM persistent_volume_claim_ref WHERE LENGTH(name) > 63;
+----------------------+-----------------------+----------------------------------------------------------------------------------------+--------------------------------------+
| persistent_volume_id | kind | name | uid |
+----------------------+-----------------------+----------------------------------------------------------------------------------------+--------------------------------------+
| �6�2/`2@���4�bo��� | PersistentVolumeClaim | prometheus-rancher-monitoring-prometheus-db-prometheus-rancher-monitoring-prometheus-0 | 31eabb2b-3768-4b01-a39f-5b3523ca8540 |
+----------------------+-----------------------+----------------------------------------------------------------------------------------+--------------------------------------+
1 row in set (0.001 sec)
MariaDB [kubernetes]>
Hi @Foxeronie,
Thanks for sharing the output.
Regarding your last error: I pushed a fix. Please pull the image and try again.
Best regards, Eric
Hi Eric,
thanks for your work. The last error is fixed; sadly, the next one appeared. Is it outside normal conventions to have such long names? Just to rule out that we are generally the problem here.
2024-04-15T14:11:28.323929591+02:00 F0415 12:11:28.323793 1 main.go:204] can't retry: can't perform "INSERT INTO `pvc` (`storage_class`, `phase`, `name`, `volume_mode`, `actual_capacity`, `desired_access_modes`, `namespace`, `uid`, `created`, `actual_access_modes`, `minimum_capacity`, `id`, `volume_name`, `resource_version`) VALUES (:storage_class, :phase, :name, :volume_mode, :actual_capacity, :desired_access_modes, :namespace, :uid, :created, :actual_access_modes, :minimum_capacity, :id, :volume_name, :resource_version) ON DUPLICATE KEY UPDATE `storage_class` = VALUES(`storage_class`), `phase` = VALUES(`phase`), `name` = VALUES(`name`), `volume_mode` = VALUES(`volume_mode`), `actual_capacity` = VALUES(`actual_capacity`), `desired_access_modes` = VALUES(`desired_access_modes`), `namespace` = VALUES(`namespace`), `uid` = VALUES(`uid`), `created` = VALUES(`created`), `actual_access_modes` = VALUES(`actual_access_modes`), `minimum_capacity` = VALUES(`minimum_capacity`), `id` = VALUES(`id`), `volume_name` = VALUES(`volume_name`), `resource_version` = VALUES(`resource_version`)": Error 1406 (22001): Data too long for column 'name' at row 18
Best regards, Patrick
Is it outside normal conventions to have such long names? Just to rule out that we are generally the problem here.
I thought they were restricted, which is why the schema is limited in that regard, but obviously they're not 😆.
Please execute the following statements to increase the available length of all volume name columns:
ALTER TABLE node_volume MODIFY COLUMN name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
ALTER TABLE pod_pvc MODIFY COLUMN volume_name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
ALTER TABLE pod_pvc MODIFY COLUMN claim_name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
ALTER TABLE pod_volume MODIFY COLUMN volume_name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
ALTER TABLE container_mount MODIFY COLUMN volume_name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
ALTER TABLE pvc MODIFY COLUMN name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
ALTER TABLE pvc MODIFY COLUMN volume_name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
ALTER TABLE persistent_volume MODIFY COLUMN name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
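For context on the length question: Kubernetes object names are DNS-1123 subdomains of up to 253 characters; the 63-character cap only applies to DNS-1123 labels (and label values), which is presumably where the original varchar width came from. Checking the PVC name from the SELECT output above:

```shell
# The PVC name from the query result above: a valid object name (<= 253 chars)
# but longer than the 63-char DNS label limit the schema apparently assumed.
name="prometheus-rancher-monitoring-prometheus-db-prometheus-rancher-monitoring-prometheus-0"
printf '%s' "$name" | wc -c   # 86
```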
I thought they were restricted, which is why the schema is limited in that regard, but obviously they're not 😆.
Ah, okay. :D
I ran all commands. One additional error appeared for the table `data`, column `name`. I also modified this column with
ALTER TABLE data MODIFY COLUMN name varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL;
and now the pod is running. :) Thanks for the help!
@Foxeronie I have revised the lengths of the name columns and pushed the fixes to the main branch. It's different from what I posted earlier. It would be really great if you could try them out. However, you would have to recreate the database, i.e. drop and create it. The schema will be imported by the daemon.
Fixes relevant for this issue:
@Foxeronie I have revised the lengths of the name columns and pushed the fixes to the main branch. It's different from what I posted earlier. It would be really great if you could try them out. However, you would have to recreate the database, i.e. drop and create it. The schema will be imported by the daemon.
Deployment was successful! 👍 Thank you. I did a fresh installation.
@Foxeronie Thanks for the prompt feedback. Much appreciated.
Affected Chart
icinga-stack
Which version of the app contains the bug?
0.3.0
Please describe your problem
Hi! After installing the Icinga stack via Helm, the icinga-stack-icinga-kubernetes pod keeps crashing with the following log output.
Best regards, Patrick