GikiSea opened this issue 1 year ago
@GikiSea You need to check whether the other tables have incremental data. If you need to fully resynchronize the historical data, truncate the task_syncer_init table and restart task-syncer. In addition, you have not created the tables dolphinscheduler depends on; please refer to the documentation.
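A minimal sketch of the full-resync step described above, assuming the compass metadata lives in a MySQL schema named `compass` and the sync checkpoint table is `task_syncer_init` (adjust names to your deployment):

```sql
-- Clear the sync checkpoint table so task-syncer re-reads history.
-- Assumption: the checkpoint table is compass.task_syncer_init.
USE compass;
TRUNCATE TABLE task_syncer_init;
-- Then restart the task-syncer process so it performs a full historical sync.
```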
@nilnon Thanks for your answer, I have run dolphinscheduler.sql in compass. Now I find that the t_ds_xx tables don't seem to be automatically synced to the compass database, but "curl localhost:8181/etl/rdb/mysql1/template.yml -X POST" works. Task-canal doesn't print any error logs.
@GikiSea You need to observe the log of the task-canal-adapter module.
@nilnon Thanks, it works now. And I found that task-application prints errors when I run a task on dolphinscheduler: the class in the error logs is from hadoop-hdfs-client-3.3.4.jar, but the Hadoop version of my cluster is 3.2.1. Could this be the problem?
@GikiSea Please recompile with the latest version; we optimized the configuration settings of this module. The above problem is due to an inconsistent nameservice configuration between application-hadoop.yml and application-dolphinscheduler.yml.
@nilnon I used the same config to deploy the latest version, but task-canal-adapter prints error logs now:
Have you recreated the tables? Please check which table in the compass MySQL database lacks the restart_time field. The table structure needs to match the dolphinscheduler table structure.
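One way to find which compass tables are missing columns that exist in the dolphinscheduler schema is to diff information_schema. A hedged sketch, assuming the two schemas are named `compass` and `dolphinscheduler` on the same MySQL instance:

```sql
-- List columns present in dolphinscheduler t_ds_* tables but absent
-- from the same-named tables in the compass schema.
SELECT d.table_name, d.column_name
FROM information_schema.columns d
LEFT JOIN information_schema.columns c
  ON c.table_schema = 'compass'
 AND c.table_name  = d.table_name
 AND c.column_name = d.column_name
WHERE d.table_schema = 'dolphinscheduler'
  AND d.table_name LIKE 't\_ds\_%'
  AND c.column_name IS NULL
ORDER BY d.table_name, d.column_name;
```

Any row returned (e.g. restart_time or task_execute_type) is a column the sync target is missing.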
Actually, I had tried recreating the tables, but it still prints error logs. I checked the mismatched columns (like restart_time or task_execute_type) in compass.sql and dolphinscheduler.sql, and it seems there are no such columns in the compass tables? Looking back at yesterday's logs, I found the same errors showed up at first, but the problem seemed to resolve itself without any action. It makes me confused...
Which version of dolphinscheduler are you using? The dolphinscheduler.sql we provide is just an example for one version; you should use the table structure of your actual dolphinscheduler version.
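To recreate the tables from the actually installed dolphinscheduler version rather than the sample dolphinscheduler.sql, you can read the live DDL straight out of the source database. A sketch, where t_ds_task_instance is just an example table name:

```sql
-- Run against the live dolphinscheduler database to get the exact DDL
-- for your installed version, then create the same table in compass.
-- Repeat for each t_ds_* table that is being synced.
SHOW CREATE TABLE dolphinscheduler.t_ds_task_instance;
```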
Thanks, I have created the tables with the right structure (I use dolphinscheduler v2.0.7). Does task-application depend on the logs I collect in order to operate? Task-application still prints the same errors and the web UI shows nothing (I haven't deployed flume to collect dolphinscheduler logs yet).
@GikiSea Yes, task-application depends on the task run logs, which need to be collected and uploaded to HDFS. Does your Hadoop cluster enable Kerberos? If so, has the relevant Kerberos information been configured according to the latest application-hadoop.yml?
@nilnon No, I previously deployed an early version of compass which didn't support Kerberos, so I turned off cluster authentication.
@GikiSea Because I don't know the configuration in your application-hadoop.yml and application-dolphinscheduler.yml, I can't judge the specific reason.
Please help take a look at this issue. I find that the dolphinscheduler data is not synced to compass (except for the successful tasks in task_instance).