Closed: lalit-satapathy closed this issue 8 months ago.
An important thing to know is whether those metrics are per named instance or per named instance/database name. For instance, we could have a naming prefix convention like mssql.metrics.instance.* or mssql.metrics.database.
Issues reference: https://github.com/elastic/integrations/issues/4257 https://github.com/elastic/integrations/issues/4258 https://github.com/elastic/integrations/issues/4259 https://github.com/elastic/integrations/issues/4260 https://github.com/elastic/integrations/issues/4272
Issue for reference: https://github.com/elastic/integrations/issues/4244
For passwords with special characters, percent-encoding can be used. It is a working solution currently. https://github.com/elastic/integrations/issues/4198
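To illustrate the workaround, here is a minimal Python sketch (not part of the integration; the sample password and host URL are made up for illustration):

```python
from urllib.parse import quote

# Percent-encode a password containing characters that would otherwise
# break an inline "hosts" connection string.
password = "p@ss:w/rd!"
encoded = quote(password, safe="")  # encode every reserved character

# The encoded form can then be embedded in the connection URL, e.g.:
host = f"sqlserver://sa:{encoded}@localhost:1433"
print(encoded)  # p%40ss%3Aw%2Frd%21
```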
I am working with a client here and found that two lines were merged into a single line for the SQL Server error log. We figured out that it was due to the encoding, and by adding this line to the config.yml file the logs were parsed correctly.
encoding: utf-16le
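For context, a sketch of how this could look in the custom input configuration (the path is an assumption for a default instance, and ignore_older is the documented option that would address request (b) below):

```yaml
# Sketch of a custom filestream configuration for the SQL Server error log.
paths:
  - 'C:\Program Files\Microsoft SQL Server\MSSQL*\MSSQL\Log\ERRORLOG*'
# SQL Server writes ERRORLOG in UTF-16LE; without this setting,
# consecutive lines can be merged into one event.
encoding: utf-16le
# Skip files older than the given time frame.
ignore_older: 72h
```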
a) Is there a possibility to add this as an option within the integration UI?
b) Support the ability to ignore files older than a certain time frame.
Thanks Murali
Let's file and track this in a separate GitHub issue.
@lalit-satapathy is there any update on this enhancement request?
One of the critical issues, "Named Instance support", has been fixed; the PR is merged.
Updated the meta issue description to help track the status better.
Any idea on when support for user-defined databases will be added? Without this, the SQL Metrics integration is completely useless.
Any updates on when additional metrics will be added? The issue description mentioned that it was planned for 8.7 release, but it does not appear that any work has been done.
As of now, you can provide user DBs manually and run the integration.
What about automatic database discovery and the additional metrics that are mentioned? It's great that user databases can be defined manually but we are a large enterprise with hundreds of SQL Clusters and countless databases hosted in those clusters. Also without the additional metrics there is a huge gap in capabilities compared to our legacy solution using System Center Operations Manager.
Add additional metrics like "deadlock" performance counters (list from @pmeresanu85). Finalise whether these belong in a new data stream or an existing one. new-counters.txt
It seems like no progress is being made in making this more usable and providing meaningful performance counters. #4903 for supporting automatic discovery of databases also seems to be stalled out. The lack of comprehensive performance monitoring continues to be a blocker in our adoption as we cannot properly monitor our MSSQL Servers, forcing us to keep legacy tooling in place specifically for this. Is this still being worked on?
@mholttech, we are not expanding the set of performance counters currently. However, there is a way to pass "Dynamic Counters" for performance metrics. Have you tried this?
Thanks @lalit-satapathy, I'll research this. Any chance of getting #4903 implemented? We have over 800 SQL Servers, and it will be a large undertaking to identify and configure each database to monitor on each SQL Server.
Hi @lalit-satapathy,
After reviewing the implementation of this here and speaking with some of our DBAs, this will not be a usable solution.
The Dynamic Counters option only accounts for the counter name, ignoring the fact that a counter name can be duplicated across object names or, more commonly, that there can be multiple instance_names for the same counter. The way the dynamic counters query is structured, it has no way of differentiating between instance_names.
A few of the use cases they have are:
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Number of Deadlocks/sec'
This results in 15 different counters with different instance_names; they need the _Total instance_name, which they would normally query with:
SELECT *
FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Locks'
AND counter_name = 'Number of Deadlocks/sec'
AND instance_name = '_Total'
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Errors/sec'
This results in 5 different instance_names; our DBAs need the instance_names User Errors, Kill Connection Errors, and DB Offline Errors.
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Log Growths'
This results in 8 different instance_names; they need the _Total instance_name.
Hi @mholttech
I don't really see a problem here.
Unless I am missing something ... out of my own curiosity (I am not a DBA): is _Total a virtual instance name on MSSQL servers?
@Danouchka,
If you look at the implementation of the query being run for Dynamic Counters here, you can see that the data collected does not extract instance_name, so there is no differentiation between the different instance names.
So, when I run the query as built in the integration:
SELECT counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE counter_name like 'Log Grow%'
It returns:
counter_name | cntr_value |
---|---|
Log Growths | 0 |
Log Growths | 0 |
Log Growths | 1 |
Log Growths | 0 |
Log Growths | 0 |
Log Growths | 0 |
Log Growths | 1 |
Log Growths | 0 |
If I modify it to
SELECT counter_name, instance_name, cntr_value FROM sys.dm_os_performance_counters WHERE counter_name like 'Log Grow%'
It returns:
counter_name | instance_name | cntr_value |
---|---|---|
Log Growths | OperationsManager | 0 |
Log Growths | OperationsManagerDW | 0 |
Log Growths | tempdb | 1 |
Log Growths | model | 0 |
Log Growths | mssqlsystemresource | 0 |
Log Growths | msdb | 0 |
Log Growths | _Total | 1 |
Log Growths | master | 0 |
I wouldn't mind Dynamic Counters being limited to returning all of them, as long as the instance_name context is available. It would also be useful to be able to define multiple counter names; right now you can only define one pattern, which is also very limiting.
In the meantime, I've begun exploring the SQL Input integration, and it looks promising. However, it is missing the ability to merge multiple queries into a single event, as documented for the underlying Metricbeat module here, and it does not include mssql.metrics.server_name. Having the merge capability along with populating mssql.metrics.server_name would allow using the SQL Input module to complement the native Performance Metrics integration.
Edit: I noticed that mssql.metrics.server_name is populated from its own query, so if merge_results is added to the SQL Input integration then we will be able to move forward with a solution for custom performance metrics.
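For reference, the underlying Metricbeat sql module documents merge_results roughly like this (a hedged sketch of standalone Metricbeat configuration, not the integration UI; the host and queries are placeholders):

```yaml
- module: sql
  metricsets: [query]
  hosts: ["sqlserver://user:password@localhost"]
  driver: mssql
  # merge_results combines the single-row results of multiple queries
  # into one event, which is what the SQL Input integration lacks.
  merge_results: true
  sql_queries:
    - query: "SELECT @@SERVERNAME AS server_name"
      response_format: table
    - query: "SELECT cntr_value AS log_growths FROM sys.dm_os_performance_counters WHERE object_name = 'SQLServer:Databases' AND counter_name = 'Log Growths' AND instance_name = '_Total'"
      response_format: table
```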
OK, understood! Good catch, thanks.
I have opened #6243 since the new SQL Input integration is very close to filling the gap. It is missing the merge_results option, but if that is added we will then be able to easily add any additional SQL performance metrics that our DBAs desire.
That only leaves a gap in automatic discovery of the databases which has stalled under #4903. While we can add database names individually, automatic discovery of the databases will simplify the onboarding process of our 800+ MSSQL Servers.
I've been able to work around the merge_results limitation by using UNION ALL to combine multiple queries into a single query. It would still be nice to see merge_results implemented to make this easier, though.
Here's what I'm running in the SQL Input plugin for anyone looking to do the same:
- query: >-
    SELECT REPLACE(concat(object_name, ' ', counter_name, instance_name), ' ', '') AS counter_name,
    convert(varchar(12), cntr_value) AS cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name LIKE 'Processes blocked%'
    UNION ALL
    SELECT REPLACE(concat(object_name, ' ', counter_name, ' ', instance_name), ' ', '') AS counter_name,
    convert(varchar(12), cntr_value)
    FROM sys.dm_os_performance_counters
    WHERE (object_name = 'SQLServer:SQL Errors' AND counter_name = 'Errors/sec'
    AND instance_name IN ('User Errors', 'Kill Connection Errors', 'DB Offline errors'))
    UNION ALL
    SELECT REPLACE(concat(object_name, ' ', counter_name, instance_name), ' ', '') AS counter_name,
    convert(varchar(12), cntr_value)
    FROM sys.dm_os_performance_counters
    WHERE (object_name = 'SQLServer:Databases' AND counter_name = 'Log Growths' AND instance_name = '_Total')
    UNION ALL
    SELECT 'server_name' AS counter_name, @@SERVERNAME AS cntr_value
    UNION ALL
    SELECT 'instance_name' AS counter_name, @@servicename AS cntr_value
  response_format: variables
I have the data_stream set to microsoft_sqlserver.performance.custom and run the data through this custom pipeline:
PUT _ingest/pipeline/metrics-microsoft_sqlserver.performance.custom@custom
{
"processors": [
{
"remove": {
"field": "sql.driver",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"remove": {
"field": "sql.query",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"rename": {
"field": "sql",
"target_field": "mssql",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"trim": {
"field": "_ingest._key",
"ignore_missing": true
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": " ",
"replacement": "_"
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": "/",
"replacement": "_"
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": ">",
"replacement": "_"
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": ":",
"replacement": "_"
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": "\\(",
"replacement": ""
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": "%",
"replacement": "pct"
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": "\\)",
"replacement": ""
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": "__",
"replacement": "_"
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": "sqlserver_",
"replacement": ""
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "mssql.metrics",
"processor": {
"gsub": {
"field": "_ingest._key",
"pattern": "sql_errors_errors_sec",
"replacement": "sql_errors_per_sec"
}
},
"ignore_failure": true
}
}
],
"on_failure": [
{
"set": {
"field": "error.message",
"value": "{{ _ingest.on_failure_message }}"
}
}
]
}
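Taken together, the gsub chain above normalizes each metric key roughly as follows (a Python sketch for illustration only, not part of the pipeline; note the processors run in order and the gsub patterns are case-sensitive):

```python
# Replicates the key normalization performed by the foreach/trim/gsub
# processors in the pipeline above, applied in the same order.
REPLACEMENTS = [
    (" ", "_"),                # spaces to underscores
    ("/", "_"),
    (">", "_"),
    (":", "_"),
    ("(", ""),
    ("%", "pct"),
    (")", ""),
    ("__", "_"),               # collapse doubled underscores
    ("sqlserver_", ""),        # drop the object-name prefix
    ("sql_errors_errors_sec", "sql_errors_per_sec"),
]

def normalize_key(key: str) -> str:
    key = key.strip()          # the leading trim processor
    for pattern, replacement in REPLACEMENTS:
        key = key.replace(pattern, replacement)
    return key

print(normalize_key("sqlserver_sql_errors_errors_sec"))  # sql_errors_per_sec
```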
@mholttech The automatic discovery of databases is on our roadmap and we are looking into it. We do not have a timeline as of now for when we can ship this feature.
The additional SQL performance metrics are low on our priority list currently. Please feel free to set up a call with us so that we can discuss the requirements.
This earlier long issue can be closed, as most issues are addressed.
For completeness, two open issues currently tracked separately are:
Yes, that's great, given all the progress done so far.
Closing per above.
Named instance support
Currently the integration supports named instances via dynamic ports. Enhancements are needed for full named instance support.
https://github.com/elastic/integrations/issues/4431
Password parsing issue
The generic SQL "hosts" string is compiled inline using the user-supplied "password". It works for some special characters but fails for others. Refer to the issue.
Additional metrics
Planned for 8.7
Support user databases.
Support additional user databases for transaction log metrics.
Merge Results.