SUSE / DeepSea

A collection of Salt files for deploying, managing and automating Ceph.
GNU General Public License v3.0

[SES6] Split output to console and Salt log #1795

Closed: swiftgist closed this pull request 4 years ago

swiftgist commented 4 years ago

The console output remains the same, but the output is now also logged to /var/log/salt/master. A restart of the salt-master is necessary for the change to take effect.
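
For context, a minimal sketch of the console-plus-log pattern this describes; the helper and function names below are hypothetical illustrations, not the actual DeepSea runner code:

```python
# Hypothetical sketch only: emit the same message to the salt-run console
# and to the salt-master log. Names (_msg, node) are illustrative.
import logging

log = logging.getLogger(__name__)


def _msg(text):
    """Write text to the caller's console and to the salt-master log."""
    print(text)      # console output of `salt-run` stays as before
    log.info(text)   # additionally recorded in /var/log/salt/master


def node(target):
    """Illustrative runner entry point."""
    _msg("Processing minion: {}".format(target))
    return True
```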

Signed-off-by: Eric Jackson <ejackson@suse.com>
jira: SES-813

Checklist:

jschmid1 commented 4 years ago

@susebot run teuthology

jschmid1 commented 4 years ago

mind posting an example output?

jschmid1 commented 4 years ago

snap, I accidentally closed the PR :/

susebot commented 4 years ago

Commit 3609d146483ce02d4a3cb0463f47db75e108e420 is OK for suite deepsea:tier2. Check tests results in the Jenkins job: http://ci.ses.suse.de:8080/job/pr-deepsea/366/

swiftgist commented 4 years ago

It's in the Jira ticket, but here's the example again. The console output remains the same as before:

# salt-run rebuild.node data2*
Running command ceph osd ok-to-stop 7 2
Emptying osd 7, 2
Removing osd 7 on host data2.ceph
Checking if OSD can be destroyed
Waiting for ceph to catch up.
Waiting for osd 7 to empty
osd.7 is safe to destroy
Purging from the crushmap
Zapping the device

Removing osd 2 on host data2.ceph
Checking if OSD can be destroyed
Waiting for ceph to catch up.
Waiting for osd 2 to empty
osd.2 is safe to destroy
Purging from the crushmap
Zapping the device

Found DriveGroup <default>
Calling dg.deploy on compound target data2.ceph

The Salt master log would have these entries:

2019-11-07 19:41:21,176 [salt.loaded.ext.runners.rebuild:186 ][INFO    ][912183] Processing minion: data2.ceph
2019-11-07 19:41:21,444 [salt.loaded.ext.runners.rebuild:189 ][INFO    ][912183] osds for ['7', '2']: data2.ceph
2019-11-07 19:41:21,749 [salt.loaded.ext.runners.rebuild:125 ][INFO    ][912183] Used: 2252992 KB  Available: 150370048 KB
2019-11-07 19:41:22,610 [salt.loaded.ext.runners.osd:434 ][INFO    ][912183] Running command ceph osd ok-to-stop 7 2
2019-11-07 19:41:32,314 [salt.loaded.ext.runners.osd:189 ][INFO    ][912183] Removing osd 7 on host data2.ceph
2019-11-07 19:41:32,315 [salt.loaded.ext.runners.osd:204 ][INFO    ][912183] Checking if OSD can be destroyed
2019-11-07 19:41:32,315 [salt.loaded.ext.runners.osd:283 ][INFO    ][912183] Waiting for osd 7 to empty
2019-11-07 19:42:02,747 [salt.loaded.ext.runners.osd:289 ][INFO    ][912183] osd.7 is safe to destroy
2019-11-07 19:42:06,394 [salt.loaded.ext.runners.osd:233 ][INFO    ][912183] Purging from the crushmap
2019-11-07 19:42:11,472 [salt.loaded.ext.runners.osd:189 ][INFO    ][912183] Removing osd 2 on host data2.ceph
2019-11-07 19:42:11,473 [salt.loaded.ext.runners.osd:204 ][INFO    ][912183] Checking if OSD can be destroyed
2019-11-07 19:42:11,473 [salt.loaded.ext.runners.osd:283 ][INFO    ][912183] Waiting for osd 2 to empty
2019-11-07 19:42:11,798 [salt.loaded.ext.runners.osd:289 ][INFO    ][912183] osd.2 is safe to destroy
2019-11-07 19:42:14,746 [salt.loaded.ext.runners.osd:233 ][INFO    ][912183] Purging from the crushmap
2019-11-07 19:42:17,014 [salt.loaded.ext.runners.rebuild:199 ][INFO    ][912183] osd_ret: {'7': {'returncode': True, 'path': '/dev/vdc', 'model': ''}, '2': {'returncode': True, 'path': '/dev/vdb', 'model': ''}}
2019-11-07 19:42:17,086 [salt.loaded.ext.runners.disks:144 ][INFO    ][912183] Found DriveGroup <default>
2019-11-07 19:42:17,086 [salt.loaded.ext.runners.disks:174 ][INFO    ][912183] Calling dg.deploy on compound target data2.ceph
2019-11-07 19:42:37,316 [salt.loaded.ext.runners.rebuild:159 ][INFO    ][912183] ...
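
For reference, the `salt.loaded.ext.runners.rebuild:186` prefix is Salt's loader namespace for external runner modules plus the source line number, so the entries above can be filtered by that name. A small, hypothetical helper (assuming the default master log path, and not part of this PR) to pull them out after restarting the salt-master:

```python
# Hypothetical verification snippet: list the entries the rebuild/osd
# runners wrote to the default Salt master log.
MASTER_LOG = "/var/log/salt/master"


def runner_entries(path=MASTER_LOG):
    """Return log lines written by the rebuild and osd runners."""
    with open(path) as logfile:
        return [line.rstrip() for line in logfile
                if "runners.rebuild" in line or "runners.osd" in line]


if __name__ == "__main__":
    for entry in runner_entries():
        print(entry)
```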
jschmid1 commented 4 years ago

Looks fine, but we should add this to master as well.

jschmid1 commented 4 years ago

@swiftgist please forward-port this to master.