scylladb / scylla-manager

The Scylla Manager
https://manager.docs.scylladb.com/stable/

SM 3.2.7 not showing the schedule field on the `sctool status` output #3807

Closed · marcelohca closed this 1 month ago

marcelohca commented 2 months ago

After upgrading Scylla Manager to 3.2.7 on about 30 clusters, I've noticed that `sctool tasks` is no longer showing the Schedule information.

[centos@ip-10-0-4-196 ~]$ sc cluster manager sctool --cluster-id 217 -- tasks
╭─────────────────────────────────────────────┬──────────┬────────┬──────────┬─────────┬───────┬────────────────────────┬────────────────────────┬────────┬────────────────────────╮
│ Task                                        │ Schedule │ Window │ Timezone │ Success │ Error │ Last Success           │ Last Error             │ Status │ Next                   │
├─────────────────────────────────────────────┼──────────┼────────┼──────────┼─────────┼───────┼────────────────────────┼────────────────────────┼────────┼────────────────────────┤
│ backup/bece44db-b658-439a-8572-e7a65269ca89 │          │        │          │ 256     │ 0     │ 16 Apr 24 16:35:54 UTC │                        │ DONE   │ 17 Apr 24 16:35:11 UTC │
│ healthcheck/cql                             │          │        │          │ 1477452 │ 98    │ 17 Apr 24 08:24:53 UTC │ 16 Apr 24 18:46:57 UTC │ DONE   │ 17 Apr 24 08:25:08 UTC │
│ healthcheck/alternator                      │          │        │          │ 1477451 │ 98    │ 17 Apr 24 08:24:53 UTC │ 16 Apr 24 18:46:57 UTC │ DONE   │ 17 Apr 24 08:25:08 UTC │
│ healthcheck/rest                            │          │        │          │ 369363  │ 25    │ 17 Apr 24 08:24:53 UTC │ 16 Apr 24 18:46:57 UTC │ DONE   │ 17 Apr 24 08:25:53 UTC │
│ repair/bcb3658c-c570-4812-9f85-acadf0de9ce8 │          │        │          │ 36      │ 0     │ 11 Apr 24 15:45:56 UTC │                        │ DONE   │ 18 Apr 24 15:35:13 UTC │
╰─────────────────────────────────────────────┴──────────┴────────┴──────────┴─────────┴───────┴────────────────────────┴────────────────────────┴────────┴────────────────────────╯
[centos@ip-10-0-4-196 ~]$

But it seems that the schedules are still working, as the next activation date is set in the `sctool info` output:

[centos@ip-10-0-4-196 ~]$ sc cluster manager sctool --cluster-id 217 -- info backup/bece44db-b658-439a-8572-e7a65269ca89
Name:   backup/bece44db-b658-439a-8572-e7a65269ca89
Cron:   {"spec":"","start_date":"0001-01-01T00:00:00Z"} (next activation date: 17 Apr 24 16:35:11 UTC)
Retry:  3

Properties:
- ShowTables: 0
- cluster-id: #217
- dc: 'AWS_EU_CENTRAL_1'
- location: 'AWS_EU_CENTRAL_1:s3:scylla-cloud-backup-217-6y63tz'
- rate-limit: 'AWS_EU_CENTRAL_1:100'
- retention: 14
- snapshot-parallel: <nil>
- units: <nil>
- upload-parallel: <nil>
- with-hosts: <nil>

╭──────────────────────────────────────┬────────────────────────┬──────────┬────────╮
│ ID                                   │ Start time             │ Duration │ Status │
├──────────────────────────────────────┼────────────────────────┼──────────┼────────┤
│ 4b29c9c7-fc0f-11ee-9b2d-02392ac9b644 │ 16 Apr 24 16:35:11 UTC │ 43s      │ DONE   │
│ 20c019c0-fb46-11ee-9b2c-02392ac9b644 │ 15 Apr 24 16:35:11 UTC │ 1m18s    │ DONE   │
│ f65648f1-fa7c-11ee-9b2b-02392ac9b644 │ 14 Apr 24 16:35:11 UTC │ 1m7s     │ DONE   │
│ cbec978a-f9b3-11ee-9b2a-02392ac9b644 │ 13 Apr 24 16:35:11 UTC │ 47s      │ DONE   │
│ a182c122-f8ea-11ee-9b29-02392ac9b644 │ 12 Apr 24 16:35:11 UTC │ 57s      │ DONE   │
│ 7719213e-f821-11ee-9b28-02392ac9b644 │ 11 Apr 24 16:35:11 UTC │ 51s      │ DONE   │
│ 4caf67e0-f758-11ee-9b26-02392ac9b644 │ 10 Apr 24 16:35:11 UTC │ 1m3s     │ DONE   │
│ 22459f82-f68f-11ee-9b25-02392ac9b644 │ 09 Apr 24 16:35:11 UTC │ 49s      │ DONE   │
│ f7dbd856-f5c5-11ee-9b24-02392ac9b644 │ 08 Apr 24 16:35:11 UTC │ 49s      │ DONE   │
│ cd7210ca-f4fc-11ee-9b23-02392ac9b644 │ 07 Apr 24 16:35:11 UTC │ 50s      │ DONE   │
╰──────────────────────────────────────┴────────────────────────┴──────────┴────────╯
[centos@ip-10-0-4-196 ~]$
[centos@ip-10-0-4-196 ~]$ sc cluster manager sctool --cluster-id 217 -- info repair/bcb3658c-c570-4812-9f85-acadf0de9ce8
Name:   repair/bcb3658c-c570-4812-9f85-acadf0de9ce8
Cron:   {"spec":"","start_date":"0001-01-01T00:00:00Z"} (next activation date: 18 Apr 24 15:35:13 UTC)
Retry:  3

╭──────────────────────────────────────┬────────────────────────┬──────────┬────────╮
│ ID                                   │ Start time             │ Duration │ Status │
├──────────────────────────────────────┼────────────────────────┼──────────┼────────┤
│ 16faf8ca-f819-11ee-9b27-02392ac9b644 │ 11 Apr 24 15:35:13 UTC │ 10m42s   │ DONE   │
│ ee16d8d7-f298-11ee-9b1f-02392ac9b644 │ 04 Apr 24 15:35:13 UTC │ 9m54s    │ DONE   │
│ c5328806-ed18-11ee-9b17-02392ac9b644 │ 28 Mar 24 15:35:13 UTC │ 10m21s   │ DONE   │
│ 9c4e4d02-e798-11ee-9b0f-02392ac9b644 │ 21 Mar 24 15:35:13 UTC │ 9m57s    │ DONE   │
│ 736a1a32-e218-11ee-9b07-02392ac9b644 │ 14 Mar 24 15:35:13 UTC │ 9m3s     │ DONE   │
│ 4a85d849-dc98-11ee-9aff-02392ac9b644 │ 07 Mar 24 15:35:13 UTC │ 9m46s    │ DONE   │
│ 21a18a95-d718-11ee-9af7-02392ac9b644 │ 29 Feb 24 15:35:13 UTC │ 10m18s   │ DONE   │
│ f8bd5813-d197-11ee-9aef-02392ac9b644 │ 22 Feb 24 15:35:13 UTC │ 11m20s   │ DONE   │
│ cfd905ec-cc17-11ee-9ae7-02392ac9b644 │ 15 Feb 24 15:35:13 UTC │ 10m1s    │ DONE   │
│ a6f4b8c8-c697-11ee-9adf-02392ac9b644 │ 08 Feb 24 15:35:13 UTC │ 10m35s   │ DONE   │
╰──────────────────────────────────────┴────────────────────────┴──────────┴────────╯
[centos@ip-10-0-4-196 ~]$
Michal-Leszczynski commented 2 months ago

@karol-kokoszka this looks like a bug related to mixing the start date and cron.

karol-kokoszka commented 2 months ago

It turns out that the deprecated `interval` is not handled correctly in the output. I checked the given cluster, and its schedule is defined by `interval` instead of `cron`.
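For illustration, here is a minimal Go sketch of the kind of rendering bug described above, assuming a hypothetical `Schedule` type that carries both a cron spec and the deprecated fixed interval (the type, field names, and functions are illustrative, not Scylla Manager's actual API):

```go
package main

import (
	"fmt"
	"time"
)

// Schedule is a hypothetical stand-in for a task schedule that supports
// both the newer cron spec and the deprecated fixed interval.
type Schedule struct {
	Cron     string        // e.g. "35 16 * * *"; empty when unset
	Interval time.Duration // deprecated; non-zero for tasks that predate cron
}

// buggyRender mirrors the reported symptom: it prints only the cron spec,
// so interval-defined schedules render as an empty Schedule column.
func buggyRender(s Schedule) string {
	return s.Cron
}

// fixedRender falls back to the deprecated interval when no cron is set.
func fixedRender(s Schedule) string {
	if s.Cron != "" {
		return s.Cron
	}
	if s.Interval > 0 {
		return s.Interval.String()
	}
	return ""
}

func main() {
	legacy := Schedule{Interval: 24 * time.Hour} // daily backup defined by interval
	fmt.Printf("buggy: %q\n", buggyRender(legacy)) // buggy: ""
	fmt.Printf("fixed: %q\n", fixedRender(legacy)) // fixed: "24h0m0s"
}
```

Under this reading, the fix amounts to falling back to the legacy interval whenever the cron spec is empty, which matches the empty `"spec":""` seen in the `sctool info` output above while the next activation date is still computed.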

Added the 3.2.8 milestone, so it's going to be fixed with the next patch release.

mykaul commented 2 months ago

@karol-kokoszka - let's make sure we have a test for this simple output. It's not the first time (I believe) we've missed something in the status command.
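As a sketch of what such a regression test could look like, assuming the hypothetical renderer from the sketch above (same package), a table-driven test that pins the Schedule column for cron-defined, interval-defined, and unscheduled tasks:

```go
package main

import (
	"testing"
	"time"
)

// TestRenderSchedule pins the rendered Schedule column so a regression
// like the one reported in this issue would fail in CI. It reuses the
// hypothetical Schedule and fixedRender from the sketch above.
func TestRenderSchedule(t *testing.T) {
	tests := []struct {
		name string
		in   Schedule
		want string
	}{
		{"cron task", Schedule{Cron: "35 16 * * *"}, "35 16 * * *"},
		{"legacy interval task", Schedule{Interval: 24 * time.Hour}, "24h0m0s"},
		{"unscheduled task", Schedule{}, ""},
	}
	for _, tc := range tests {
		if got := fixedRender(tc.in); got != tc.want {
			t.Errorf("%s: got %q, want %q", tc.name, got, tc.want)
		}
	}
}
```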

marcelohca commented 2 months ago

@karol-kokoszka So I can consider it safe to continue the manager upgrade to 3.2.7 for all the clusters, right?

karol-kokoszka commented 2 months ago

> @karol-kokoszka So I can consider it safe to continue the manager upgrade to 3.2.7 for all the clusters, right?

Yes

karol-kokoszka commented 2 months ago

Grooming notes

@mikliapko is checking the DTests; that is expected to be done by today. We must then decide where to keep the CLI tests. They can either live in DTests (which means we are not dropping support for them) or we can keep them in the Scylla-Manager repository and execute them as part of CI.

For the moment the test will live in the Scylla Manager repository. The discussion about DTests vs. the SM repo is to be continued.