datavane / datasophon

The next generation of cloud-native big data management expert, aiming to help users rapidly build stable, efficient, and scalable cloud-native platforms for big data.
https://datasophon.github.io/datasophon-website/
Apache License 2.0

[Bug] [api] Database initialization script execution fails when starting the service #510

Closed. luoyajun10 closed this issue 4 months ago.

luoyajun10 commented 4 months ago

Search before asking

What happened

I am a newbie to Datasophon. When starting the datasophon api service for the first time, I got a 'Data too long for column' error.

What you expected to happen

The error log is as follows:

2024-02-22 09:07:24 [main] ERROR com.datasophon.api.migration.DatabaseMigration - Script execute failed! V1.1.0__DML.sql
org.apache.ibatis.jdbc.RuntimeSqlException: Error executing: INSERT INTO `t_ddh_cluster_alert_quota` VALUES (483, 'NameNode老年代GC持续时间[5m]', 'HDFS', 'avg_over_time(Hadoop_NameNode_GcTimeMillisPS_MarkSweep{job=\"namenode\"}[5m])/1000', 1, 2, 2, '老年代GC时间过长,可考虑加大堆内存', '>', 60, 1, 1, 60, 'NameNode', 1, '2022-07-14 14:22:36').  Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'alert_quota_name' at row 1
        at org.apache.ibatis.jdbc.ScriptRunner.executeLineByLine(ScriptRunner.java:150)
        at org.apache.ibatis.jdbc.ScriptRunner.runScript(ScriptRunner.java:110)
        at com.datasophon.api.migration.DatabaseMigration.runScript(DatabaseMigration.java:240)
        at com.datasophon.api.migration.DatabaseMigration.doMigration(DatabaseMigration.java:140)
        at com.datasophon.api.migration.DatabaseMigration.doMigrations(DatabaseMigration.java:125)
        at com.datasophon.api.migration.DatabaseMigration.migration(DatabaseMigration.java:94)
        at com.datasophon.api.configuration.DatabaseMigrationAware.setApplicationContext(DatabaseMigrationAware.java:43)
        at org.springframework.context.support.ApplicationContextAwareProcessor.invokeAwareInterfaces(ApplicationContextAwareProcessor.java:128)

After some investigation, the cause is that when the table 't_ddh_cluster_alert_quota' is initialized, the inserted data exceeds the length limit of 32 on the 'alert_quota_name' field. The script ran successfully after I adjusted the field length to 255.
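
For reference, a minimal sketch of that workaround (assuming MySQL; the table and column names are taken from the DDL quoted below, and only the length changes):

ALTER TABLE `t_ddh_cluster_alert_quota`
  MODIFY COLUMN `alert_quota_name` varchar(255) DEFAULT NULL COMMENT '告警指标名称'; -- widened from varchar(32)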

How to reproduce

Refer to the DDL and DML statements:

CREATE TABLE `t_ddh_cluster_alert_quota`  (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `alert_quota_name` varchar(32)  DEFAULT NULL COMMENT '告警指标名称',
  ...
  ...
  PRIMARY KEY (`id`)
) AUTO_INCREMENT = 628 DEFAULT CHARSET=utf8mb4 COMMENT = '集群告警指标表 ' ROW_FORMAT = DYNAMIC;
INSERT INTO `t_ddh_cluster_alert_quota` VALUES (483, 'NameNode老年代GC持续时间[5m]', 'HDFS', 'avg_over_time(Hadoop_NameNode_GcTimeMillisPS_MarkSweep{job=\"namenode\"}[5m])/1000', 1, 2, 2, '老年代GC时间过长,可考虑加大堆内存', '>', 60, 1, 1, 60, 'NameNode', 1, '2022-07-14 14:22:36');

It is thus clear that the value 'NameNode老年代GC持续时间[5m]' is 35 bytes long in UTF-8 (21 characters: 14 ASCII bytes plus 7 CJK characters at 3 bytes each), which exceeds 32.
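
As a sanity check, MySQL's built-in CHAR_LENGTH and LENGTH functions can compare the two measures directly (a minimal sketch, assuming a client session using utf8mb4):

SELECT
  CHAR_LENGTH('NameNode老年代GC持续时间[5m]') AS char_len, -- returns 21 (characters)
  LENGTH('NameNode老年代GC持续时间[5m]') AS byte_len;      -- returns 35 (UTF-8 bytes)

Since varchar(32) counts characters in MySQL, the 35 here is the byte length; widening the column to 255, as described above, leaves headroom under either measure.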

Anything else

The charsets of the filesystem and the database are utf8 and utf8mb4, respectively.
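
For anyone verifying this locally, the effective charsets can be inspected with standard MySQL statements (nothing Datasophon-specific):

SHOW VARIABLES LIKE 'character_set%'; -- server, database, client, and connection charsets
SHOW CREATE TABLE `t_ddh_cluster_alert_quota`; -- confirms the table-level DEFAULT CHARSET=utf8mb4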

Version

dev

Are you willing to submit PR?

Code of Conduct