hbrown-uiowa opened 1 year ago
With debug turned up for `com.redhat.rhn`, I get this additional bit:

```
2023-07-28 11:32:00,639 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.taskomatic.task.MgrSyncRefresh - Scheduling synchronization of all vendor channels
2023-07-28 11:32:00,643 [Thread-3] DEBUG com.redhat.rhn.common.hibernate.DefaultConnectionManager - YYY Opening Hibernate Session
2023-07-28 11:32:00,644 [Thread-3] DEBUG com.redhat.rhn.common.hibernate.DefaultConnectionManager - YYY Opened Hibernate session SessionImpl(56574899<open>)
2023-07-28 11:32:00,646 [Thread-3] DEBUG com.redhat.rhn.common.hibernate.DefaultConnectionManager - YYY Closing Hibernate Session: SessionImpl(56574899<open>)
2023-07-28 11:32:00,648 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.common.conf.Config - getString() - getString() called with: tasko_server.host
2023-07-28 11:32:00,648 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.common.conf.Config - getString() - getString() -> Getting property: host
2023-07-28 11:32:00,648 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.common.conf.Config - getString() - getString() -> result: null
2023-07-28 11:32:00,648 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.common.conf.Config - getString() - getString() -> returning: null
2023-07-28 11:32:00,648 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.common.conf.Config - getString() - returning default value
2023-07-28 11:32:00,648 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.common.conf.Config - getString() - getString() called with: tasko_server.port
2023-07-28 11:32:00,648 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.common.conf.Config - getString() - getString() -> Getting property: port
2023-07-28 11:32:00,648 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.common.conf.Config - getString() - getString() -> result: null
2023-07-28 11:32:00,648 [DefaultQuartzScheduler_Worker-16] DEBUG com.redhat.rhn.common.conf.Config - getString() - getString() -> returning: null
2023-07-28 11:32:00,676 [Thread-4] DEBUG com.redhat.rhn.common.hibernate.DefaultConnectionManager - YYY Opening Hibernate Session
2023-07-28 11:32:00,676 [Thread-4] DEBUG com.redhat.rhn.common.hibernate.DefaultConnectionManager - YYY Opened Hibernate session SessionImpl(1632100143<open>)
2023-07-28 11:32:00,707 [Thread-4] WARN com.redhat.rhn.taskomatic.TaskoQuartzHelper - Job single-repo-sync-bunch-1 failed to schedule.
```
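As an aside, the `Config - getString()` lines above show a namespaced lookup (`tasko_server.host`) falling back to the bare property name (`host`) and finally to a built-in default. A rough sketch of that fallback behaviour, purely for illustration (the function and names here are mine, not Uyuni's actual `Config` implementation, and the default value is just a placeholder):

```python
def get_string(config, key, default=None):
    """Illustrative lookup: full namespaced key, then bare key, then default."""
    if config.get(key) is not None:
        return config[key]                 # e.g. "tasko_server.host"
    bare = key.rsplit(".", 1)[-1]          # "tasko_server.host" -> "host"
    if config.get(bare) is not None:
        return config[bare]                # "Getting property: host"
    return default                         # "returning default value"
```

In the log, both lookups come back `null`, so taskomatic falls through to its defaults for the host and port it listens on.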
@mackdk I'm linking you here because it seems like an issue that has already been reported, and you might be familiar with it :smiley:
This continues to be a problem on 2023.10.
The error message has changed somewhat (from rhn_taskomatic_daemon.log):
```
2023-11-28 11:32:00,521 [DefaultQuartzScheduler_Worker-70] ERROR com.redhat.rhn.taskomatic.task.MgrSyncRefresh - Executing a task threw an exception: org.quartz.JobExecutionException
org.quartz.JobExecutionException: com.redhat.rhn.taskomatic.TaskomaticApiException: redstone.xmlrpc.XmlRpcFault: org.quartz.ObjectAlreadyExistsException: Unable to store Job : '1.single-repo-sync-bunch-1', because one already exists with this identification.
        at com.redhat.rhn.taskomatic.task.MgrSyncRefresh.execute(MgrSyncRefresh.java:136) ~[rhn.jar:?]
        at com.redhat.rhn.taskomatic.task.RhnJavaJob.execute(RhnJavaJob.java:56) ~[rhn.jar:?]
        at com.redhat.rhn.taskomatic.TaskoJob.doExecute(TaskoJob.java:240) ~[rhn.jar:?]
        at com.redhat.rhn.taskomatic.TaskoJob.runTemplate(TaskoJob.java:193) ~[rhn.jar:?]
        at com.redhat.rhn.taskomatic.TaskoJob.execute(TaskoJob.java:145) ~[rhn.jar:?]
        at org.quartz.core.JobRunShell.run(JobRunShell.java:202) ~[quartz-2.3.0.jar:?]
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) ~[quartz-2.3.0.jar:?]
Caused by: com.redhat.rhn.taskomatic.TaskomaticApiException: redstone.xmlrpc.XmlRpcFault: org.quartz.ObjectAlreadyExistsException: Unable to store Job : '1.single-repo-sync-bunch-1', because one already exists with this identification.
        at com.redhat.rhn.taskomatic.TaskomaticApi.invoke(TaskomaticApi.java:92) ~[rhn.jar:?]
        at com.redhat.rhn.taskomatic.TaskomaticApi.scheduleSingleRepoSync(TaskomaticApi.java:172) ~[rhn.jar:?]
        at com.redhat.rhn.taskomatic.task.MgrSyncRefresh.execute(MgrSyncRefresh.java:127) ~[rhn.jar:?]
        ... 6 more
Caused by: redstone.xmlrpc.XmlRpcFault: org.quartz.ObjectAlreadyExistsException: Unable to store Job : '1.single-repo-sync-bunch-1', because one already exists with this identification.
        at redstone.xmlrpc.XmlRpcClient.handleResponse(XmlRpcClient.java:444) ~[redstone-xmlrpc-client-1.1_20071120.jar:?]
        at redstone.xmlrpc.XmlRpcClient.endCall(XmlRpcClient.java:376) ~[redstone-xmlrpc-client-1.1_20071120.jar:?]
        at redstone.xmlrpc.XmlRpcClient.invoke(XmlRpcClient.java:209) ~[redstone-xmlrpc-client-1.1_20071120.jar:?]
        at com.redhat.rhn.taskomatic.TaskomaticApi.invoke(TaskomaticApi.java:89) ~[rhn.jar:?]
        at com.redhat.rhn.taskomatic.TaskomaticApi.scheduleSingleRepoSync(TaskomaticApi.java:172) ~[rhn.jar:?]
        at com.redhat.rhn.taskomatic.task.MgrSyncRefresh.execute(MgrSyncRefresh.java:127) ~[rhn.jar:?]
        ... 6 more
```
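The root cause in the trace is Quartz's `ObjectAlreadyExistsException`: Quartz identifies a job by its key (group plus name, here `1.single-repo-sync-bunch-1`), and refuses to store a second job under an existing key. A toy Python sketch of that check, purely illustrative (real Quartz is Java, and its `JobStore.storeJob(JobDetail, boolean replaceExisting)` accepts a replace flag):

```python
class ObjectAlreadyExistsError(Exception):
    pass

class InMemoryJobStore:
    """Toy stand-in for a Quartz job store, keyed by 'group.name'."""
    def __init__(self):
        self._jobs = {}

    def store_job(self, group, name, replace=False):
        key = f"{group}.{name}"
        if key in self._jobs and not replace:
            # Mirrors the message in the stack trace above.
            raise ObjectAlreadyExistsError(
                f"Unable to store Job : '{key}', "
                "because one already exists with this identification.")
        self._jobs[key] = (group, name)

store = InMemoryJobStore()
store.store_job("1", "single-repo-sync-bunch-1")   # first schedule succeeds
# A second schedule attempt with the same key raises, unless replace=True.
```

So the scheduler is trying to schedule a repo-sync job whose key is already present, which suggests a stale job entry left behind from an earlier run rather than a problem with the sync itself.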
Is there DB cleanup required?
We currently have `java.unify_custom_channel_management = 0` set in rhn.conf, but I can't set schedules via either the web interface or spacecmd.
### Problem description

The `mgr-sync-refresh-default` task fails to schedule and no repos are being synced unless we run `spacewalk-repo-sync` manually.
Unfortunately, it is working fine on our test instance.

### Steps to reproduce

...

### Uyuni version

### Uyuni proxy version (if used)

No response

### Useful logs

### Additional information
Two changes happened during this time: we patched openSUSE Leap 15.4 to the latest OS patches, and we had issues with Salt and downgraded it.
We also tried adding a new channel and setting a schedule for it, but both the API and spacecmd refused to let us set one. Some quick web searching suggested that all repo syncing is intended to be driven by the `mgr-sync-refresh-default` task instead, so I stripped all schedules from the channels.
I can't find a way to see what the failing API call is, or what the response is that isn't getting parsed.
Rebooting the server and restarting all services doesn't make a difference.