WAdama / nas_ab_status

Bash script for monitoring status of device backup in Synology Active Backup for Business in PRTG
GNU General Public License v3.0

an alternative query #8

Closed. r2evans closed this issue 8 months ago.

r2evans commented 9 months ago

This doesn't produce XML, but I was inspired by your repo to put together my own single query for much of the same information. When connected to the config.db file, we can attach activity.db (assumed to be in the current directory here) to reach the other tables in the same command.

This saves repeated calls to sqlite3, and while repeated calls are likely performant enough, I thought it might be good (and/or useful) to unify it all into a single select statement.

.mode table
attach database 'activity.db' as actdb;
with cte as (
  select dt.login_user, dt.host_name, -- rt.backup_type, rt.task_config,
    -- rt.time_start, rt.time_end,
    datetime(rt.time_start, 'unixepoch', 'localtime') as time_start,
    datetime(rt.time_end, 'unixepoch', 'localtime') as time_end,
    (case when rt.time_end is null then ''
          else format('%02.1fh', (rt.time_end - rt.time_start) / 3600.0)
          end) as hours,
    (case when drt.transfered_bytes > 1e12 then (round(drt.transfered_bytes / 1e12, 2) || ' TB')
          when drt.transfered_bytes > 1e9 then (round(drt.transfered_bytes / 1e9, 2) || ' GB')
          when drt.transfered_bytes > 1e6 then (round(drt.transfered_bytes / 1e6, 2) || ' MB')
          when drt.transfered_bytes > 1e3 then (round(drt.transfered_bytes / 1e3, 2) || ' KB')
     end) as transferred,
    json_extract(rdt.other_params, '$.speed') as speed,
    -- rt.result_id,
    (case when lt.log_type=1101 then 'started'
          when lt.log_type=1102 then 'completed'
          when lt.log_type=1115 then 'trigger screenlock'
          when lt.log_type=1125 then 'timeout'
          when lt.log_type=1137 then 'being processed'
          when lt.log_type=1146 then 'missed'
          when lt.log_type=1325 then 'deletion succeeded'
          when lt.log_type=1327 then 'starting to delete'
          else 'unk' end) as result,
    dt.device_id, rt.result_id,
    rank() over (partition by dt.device_id order by rt.time_end desc) as rn
  from device_table dt
    left join backup_task_device btd on dt.device_id = btd.device_id
    left join task_table tt on tt.task_id = btd.task_id
    left join result_table rt on rt.task_id = btd.task_id
      and rt.task_config like '%"device_id":' || dt.device_id || ',%'
    left join device_result_table drt on rt.result_id = drt.result_id
    left join result_detail_table rdt on rt.result_id = rdt.result_id
    left join log_table lt on rt.result_id = lt.result_id
  where (rt.job_action is null or rt.job_action = 1)
    and (rdt.log_type is null or rdt.log_type = '1111')
)
select * from cte where rn <= 3
order by device_id, rn;

Output (de-PII-ized):

+------------+-----------------+---------------------+---------------------+-------+-------------+-------------+-----------+-----------+-----------+----+
| login_user |    host_name    |     time_start      |      time_end       | hours | transferred |    speed    |  result   | device_id | result_id | rn |
+------------+-----------------+---------------------+---------------------+-------+-------------+-------------+-----------+-----------+-----------+----+
| sarah      | host1           |                     |                     |       |             |             | unk       | 3         |           | 1  |
| sarah      | otherhost       | 2024-01-18 08:28:33 | 2024-01-18 08:34:33 | 0.1h  | 8.24 GB     | 21.00 MB/s  | completed | 4         | 1391      | 1  |
| sarah      | otherhost       | 2024-01-09 07:51:21 | 2024-01-09 07:55:04 | 0.1h  | 4.76 GB     | 20.00 MB/s  | completed | 4         | 1372      | 2  |
| sarah      | otherhost       | 2023-12-28 15:23:10 | 2023-12-28 15:26:35 | 0.1h  | 3.83 GB     | 17.00 MB/s  | completed | 4         | 1325      | 3  |
| jack       | jacks_desktop   | 2024-01-21 04:00:02 | 2024-01-21 06:41:36 | 2.7h  | 108.96 GB   | 10.00 MB/s  | completed | 5         | 1404      | 1  |
| jack       | jacks_desktop   | 2024-01-20 04:00:03 | 2024-01-20 04:06:44 | 0.1h  | 4.99 GB     | 11.00 MB/s  | completed | 5         | 1400      | 2  |
| jack       | jacks_desktop   | 2024-01-19 04:00:02 | 2024-01-19 04:02:30 | 0.0h  | 2.09 GB     | 13.00 MB/s  | completed | 5         | 1398      | 3  |
| sarah      | host2           | 2024-01-13 15:24:34 | 2024-01-13 15:35:42 | 0.2h  | 10.09 GB    | 14.00 MB/s  | completed | 6         | 1382      | 1  |
| sarah      | host2           | 2023-12-31 10:53:09 | 2023-12-31 11:06:04 | 0.2h  | 14.82 GB    | 18.00 MB/s  | completed | 6         | 1337      | 2  |
| bob        | old_laptop      |                     |                     |       |             |             | unk       | 7         |           | 1  |
| bob        | current_laptop  | 2024-01-22 03:29:42 | 2024-01-22 08:59:56 | 5.5h  | 4.0 GB      | 197.00 KB/s | completed | 8         | 1406      | 1  |
| bob        | current_laptop  | 2024-01-20 16:43:22 | 2024-01-20 18:39:34 | 1.9h  | 78.86 GB    | 10.00 MB/s  | completed | 8         | 1401      | 2  |
| bob        | current_laptop  | 2024-01-10 21:16:17 | 2024-01-13 21:44:27 | 72.5h | 3.3 GB      | 12.00 KB/s  | timeout   | 8         | 1376      | 3  |
| sarah      | host3           |                     |                     |       |             |             | unk       | 9         |           | 1  |
+------------+-----------------+---------------------+---------------------+-------+-------------+-------------+-----------+-----------+-----------+----+
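
For reference, the whole thing can run non-interactively in a single call. Below is a minimal sketch, with the caveat that the /volume1/@ActiveBackup/ location and the file name ab_status.sql (the attach + select above, saved without the .mode line) are my assumptions, not something from this repo:

```sh
#!/bin/sh
# Sketch: run the combined query in one sqlite3 invocation instead of several.
# The path below assumes a default Synology Active Backup install; adjust as needed.
# ab_status.sql is a made-up name for the attach + select shown above (without the .mode line).
cd /volume1/@ActiveBackup || exit 1          # the attach uses the relative path 'activity.db'
sqlite3 -cmd ".mode table" config.db ".read /path/to/ab_status.sql"
```

Swapping `.mode table` for `.mode json` (sqlite3 3.33 or newer) gives the machine-readable output shown further down in this thread.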

While my use is for a simple ssh console output, the method could easily be adapted to produce XML instead of the above tabular format.
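
As a rough illustration only, not the script from this repo: assuming PRTG's "EXE/Script Advanced" sensor XML format, the same hypothetical ab_status.sql from the sketch above, and jq being available on the NAS, the newest run per device could be turned into one channel each:

```sh
#!/bin/sh
# Sketch only: wrap the newest run per device (rn == 1) in PRTG advanced-sensor XML.
# Same assumed paths/file name as the earlier sketch; jq must be installed on the NAS.
cd /volume1/@ActiveBackup || exit 1

echo "<prtg>"
sqlite3 -cmd ".mode json" config.db ".read /path/to/ab_status.sql" | jq -r '
  .[]
  | select(.rn == 1)
  | "<result><channel>\(.host_name)</channel><value>\(if .result == "completed" then 1 else 0 end)</value></result>"'
echo "</prtg>"
```

A real sensor would probably want more channels (age of the last backup, transferred bytes) and some error handling, but the shape of the XML is the only PRTG-specific part.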

I started mapping out the databases so that I could better understand the joins.

(attached image: partial diagram of the config.db/activity.db tables and their join relationships)

This is far from complete and would benefit from filling in the remaining foreign-key links (and it does not attempt to differentiate between config.db and activity.db; they appear to be "all the same db" after the attach database ... above).

This can be "played with" using https://sql.toad.cz: click "Save / Load", paste the text below into the "Input/Output" text box, then click "LOAD XML".

schema xml (collapsed block with the exported table definitions; the XML itself is not reproduced here)

Offered solely as a "hope it helps".

WAdama commented 9 months ago

Hi @r2evans

This looks very interesting, and of course it helps. Databases are not really my area of expertise, so any input is appreciated; thanks for that.

I'll try to use it to improve my script.

Is this run directly in sqlite3, or is it part of the script?

Regards Ingo

r2evans commented 9 months ago

I wanted to remind you that sqlite3 has a .mode setting that changes its output format, which makes the results much easier to consume. For instance, while it doesn't support XML, it does support json, which is very easy to parse in most languages:

[{"login_user":"bob","host_name":"host1","time_start":null,"time_end":null,"hours":"","transferred":null,"speed":null,"result":"unk","device_id":3,"result_id":null,"rn":1},
{"login_user":"bob","host_name":"host2","time_start":"2024-01-23 09:40:18","time_end":"2024-01-23 09:46:44","hours":"0.1h","transferred":"12.42 GB","speed":"30.00 MB/s","result":"completed","device_id":4,"result_id":1412,"rn":1},
{"login_user":"jack","host_name":"somewhere","time_start":"2024-01-24 04:03:56","time_end":"2024-01-24 04:05:47","hours":"0.0h","transferred":"1.55 GB","speed":"13.00 MB/s","result":"completed","device_id":5,"result_id":1415,"rn":1},
{"login_user":"bob","host_name":"host3","time_start":"2024-01-13 15:24:34","time_end":"2024-01-13 15:35:42","hours":"0.2h","transferred":"10.09 GB","speed":"14.00 MB/s","result":"completed","device_id":6,"result_id":1382,"rn":1},
{"login_user":"sarah","host_name":"otherhost","time_start":null,"time_end":null,"hours":"","transferred":null,"speed":null,"result":"unk","device_id":7,"result_id":null,"rn":1},
{"login_user":"sarah","host_name":"somehost","time_start":"2024-01-23 08:43:39","time_end":"2024-01-23 08:55:05","hours":"0.2h","transferred":"4.95 GB","speed":"6.00 MB/s","result":"completed","device_id":8,"result_id":1411,"rn":1},
{"login_user":"bob","host_name":"host1","time_start":null,"time_end":null,"hours":"","transferred":null,"speed":null,"result":"unk","device_id":9,"result_id":null,"rn":1}]

It's true JSON (a single complete array), not ndjson (newline-delimited JSON).
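
For example, with the same assumed file name and location as earlier in this thread, a jq one-liner can unwrap the array with .[] and flag every device whose most recent run is anything other than 'completed':

```sh
# Flag devices whose newest run (rn == 1) did not finish with 'completed'.
cd /volume1/@ActiveBackup || exit 1
sqlite3 -cmd ".mode json" config.db ".read /path/to/ab_status.sql" \
  | jq -r '.[] | select(.rn == 1 and .result != "completed") | "\(.host_name): \(.result)"'
```

Devices that have never backed up show up here too, since their result comes through as 'unk'.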

WAdama commented 9 months ago

Again, thanks for your input...

I've already played with it. It will take me a little while to get it running, but I'm already working on it...

WAdama commented 8 months ago

Work in progress.

WAdama commented 8 months ago

Hi,

thanks again for your help.

I've just published my new PRTG script. I was also able to use most of your query.