Closed jkutikacn closed 1 year ago
Hi @jkutikacn,
Thanks for the issue. It looks like the diff you are proposing is against https://github.com/influxdata/telegraf/blob/master/plugins/inputs/sqlserver/azuresqldbqueries.go#L246:
```diff
- SUM(max_size)
+ SUM(CAST (max_size as BIGINT))
```
@Trovalo thoughts on the change?
I approve it; we shouldn't run into arithmetic overflow.
@jkutikacn,
I've put up https://github.com/influxdata/telegraf/pull/13870 with your change. Once artifacts are available, can you confirm the fix please? Thanks!
Hi @powersj, I can confirm the fix works! Thanks a lot and cheers!
Relevant telegraf.conf
Logs from Telegraf
System info
1.27.0
Docker
No response
Steps to reproduce
Expected behavior
The plugin should be adapted to the Azure SQL Hyperscale tier, which allows database sizes up to 100 TB and a virtually unlimited TLOG size; limiting the monitorable TLOG size therefore makes no sense.
Actual behavior
If the TLOG size does not fit into the INT range, the query fails with `Arithmetic overflow error converting expression to data type int`.
Additional info
Line 246 of the plugin needs to be modified, for instance like this; afterwards the statement executes successfully on the database (tested only via SSMS):

```sql
,(SELECT SUM(CAST (max_size as BIGINT)) * 8 / (1024 * 1024) FROM sys.database_files WHERE type_desc = 'LOG') AS max_log_mb
```
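As a back-of-the-envelope check (a sketch, not part of the plugin), the overflow is easy to see from the numbers: `sys.database_files.max_size` is counted in 8 KB pages, so a Hyperscale database at its 100 TB limit alone needs more pages than a 32-bit `INT` can hold, which is why summing without the `BIGINT` cast overflows:

```python
# Assumed constants for illustration: SQL Server INT is 32-bit signed,
# BIGINT is 64-bit signed, and max_size is expressed in 8 KB pages.
INT32_MAX = 2**31 - 1
INT64_MAX = 2**63 - 1
PAGE_SIZE_BYTES = 8 * 1024  # 8 KB per page

# 100 TB, the Hyperscale database size limit mentioned in the issue
hyperscale_db_bytes = 100 * 1024**4
pages = hyperscale_db_bytes // PAGE_SIZE_BYTES

print(pages)              # 13421772800 pages for a single 100 TB file set
print(pages > INT32_MAX)  # True  -> SUM() over INT overflows
print(pages <= INT64_MAX) # True  -> fits comfortably after CAST to BIGINT
```

So 13,421,772,800 pages exceeds the INT ceiling of 2,147,483,647 roughly sixfold, and a large TLOG only makes it worse, while BIGINT leaves many orders of magnitude of headroom.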