DataDog / integrations-core

Core integrations of the Datadog Agent
BSD 3-Clause "New" or "Revised" License

Datadog Agent does not use the jdbc_driver_path parameter from the Oracle integration config file #10731

Closed: Thymos2k closed this issue 2 years ago

Thymos2k commented 2 years ago

Hi guys,

We are trying to set up the Oracle integration for Datadog on a Linux server without the Oracle Instant Client installed. The Datadog documentation states that this is possible if one uses the Oracle JDBC driver instead. To use the driver, one has to set the path to it via the jdbc_driver_path parameter in the conf.yaml file of the Oracle integration. That is what we have done (see the content of the conf file below), but Datadog is not using the driver. We assume this because the error message stays the same after changing the parameter and restarting the Agent.

What we have already tried:

We also tried setting the driver path to a path that does not exist. Even then, the error message in the Agent logs stayed the same. That led us to the conclusion that there must be a general issue with how the jdbc_driver_path parameter in the Oracle integration's conf.yaml is used.
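One quick way to surface the full check error without waiting for a scheduled run is to execute the check once in the foreground. This is a troubleshooting sketch, assuming the default dd-agent user of a Linux package install:

  # Run the oracle check once and print its status, including any traceback:
  sudo -u dd-agent datadog-agent check oracle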

Best regards

Output of the info page

=============== Agent (v6.19.2)

Status date: 2021-11-25 11:06:09.691383 CET
Agent start: 2021-11-17 12:56:21.548352 CET
Pid: 301
Go Version: go1.13.8
Python Version: 2.7.17
Build arch: amd64
Check Runners: 4
Log Level: info

Paths

Config File: /etc/datadog-agent/datadog.yaml
conf.d: /etc/datadog-agent/conf.d
checks.d: /etc/datadog-agent/checks.d

Clocks

NTP offset: 3.072ms
System UTC time: 2021-11-25 11:06:09.691383 CET

Host Info

bootTime: 2020-11-20 00:15:52.000000 CET
kernelVersion: 4.14.35-1844.1.3.el7uek.x86_64
os: linux
platform: oracle
platformFamily: rhel
platformVersion: 7.6
procs: 262
uptime: 8700h40m30s

Hostnames

ec2-hostname: ****
hostname: ****
instance-id: ****
socket-fqdn: ****
socket-hostname: ****
hostname provider: os
unused hostname providers:
  aws: not retrieving hostname from AWS: the host is not an ECS instance, and other providers already retrieve non-default hostnames
  configuration/environment: hostname is empty
  gce: unable to retrieve hostname from GCE: status code 404 trying to GET http://****/computeMetadata/v1/instance/hostname

Metadata

cloud_provider: AWS
hostname_source: os

========= Collector

Running Checks

cpu
---
  Instance ID: cpu [OK]
  Configuration Source: file:/etc/datadog-agent/conf.d/cpu.d/conf.yaml.default
  Total Runs: 45,639
  Metric Samples: Last Run: 6, Total: 273,828
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 0, Total: 0
  Average Execution Time : 0s
  Last Execution Date : 2021-11-25 11:06:00.000000 CET
  Last Successful Execution Date : 2021-11-25 11:06:00.000000 CET

disk (2.8.0)
------------
  Instance ID: disk:e5dffb8bef24336f [OK]
  Configuration Source: file:/etc/datadog-agent/conf.d/disk.d/conf.yaml.default
  Total Runs: 45,639
  Metric Samples: Last Run: 64, Total: 2,932,296
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 0, Total: 0
  Average Execution Time : 51ms
  Last Execution Date : 2021-11-25 11:06:07.000000 CET
  Last Successful Execution Date : 2021-11-25 11:06:07.000000 CET

file_handle
-----------
  Instance ID: file_handle [OK]
  Configuration Source: file:/etc/datadog-agent/conf.d/file_handle.d/conf.yaml.default
  Total Runs: 45,639
  Metric Samples: Last Run: 5, Total: 228,195
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 0, Total: 0
  Average Execution Time : 0s
  Last Execution Date : 2021-11-25 11:05:59.000000 CET
  Last Successful Execution Date : 2021-11-25 11:05:59.000000 CET

io
--
  Instance ID: io [OK]
  Configuration Source: file:/etc/datadog-agent/conf.d/io.d/conf.yaml.default
  Total Runs: 45,639
  Metric Samples: Last Run: 52, Total: 2,373,192
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 0, Total: 0
  Average Execution Time : 0s
  Last Execution Date : 2021-11-25 11:06:06.000000 CET
  Last Successful Execution Date : 2021-11-25 11:06:06.000000 CET

load
----
  Instance ID: load [OK]
  Configuration Source: file:/etc/datadog-agent/conf.d/load.d/conf.yaml.default
  Total Runs: 45,639
  Metric Samples: Last Run: 6, Total: 273,834
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 0, Total: 0
  Average Execution Time : 0s
  Last Execution Date : 2021-11-25 11:05:58.000000 CET
  Last Successful Execution Date : 2021-11-25 11:05:58.000000 CET

memory
------
  Instance ID: memory [OK]
  Configuration Source: file:/etc/datadog-agent/conf.d/memory.d/conf.yaml.default
  Total Runs: 45,639
  Metric Samples: Last Run: 17, Total: 775,863
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 0, Total: 0
  Average Execution Time : 0s
  Last Execution Date : 2021-11-25 11:06:05.000000 CET
  Last Successful Execution Date : 2021-11-25 11:06:05.000000 CET

network (1.15.1)
----------------
  Instance ID: network:e0204ad63d43c949 [OK]
  Configuration Source: file:/etc/datadog-agent/conf.d/network.d/conf.yaml.default
  Total Runs: 45,639
  Metric Samples: Last Run: 31, Total: 1,414,809
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 0, Total: 0
  Average Execution Time : 1ms
  Last Execution Date : 2021-11-25 11:05:57.000000 CET
  Last Successful Execution Date : 2021-11-25 11:05:57.000000 CET

ntp
---
  Instance ID: ntp:d884b5186b651429 [OK]
  Configuration Source: file:/etc/datadog-agent/conf.d/ntp.d/conf.yaml.default
  Total Runs: 761
  Metric Samples: Last Run: 1, Total: 761
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 1, Total: 761
  Average Execution Time : 257ms
  Last Execution Date : 2021-11-25 10:56:23.000000 CET
  Last Successful Execution Date : 2021-11-25 10:56:23.000000 CET

oracle (2.0.1)
--------------
  Instance ID: oracle:94fd1e3f61a713c [ERROR]
  Configuration Source: file:/etc/datadog-agent/conf.d/oracle.d/conf.yaml
  Total Runs: 45,640
  Metric Samples: Last Run: 0, Total: 0
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 1, Total: 45,640
  Average Execution Time : 1ms
  Last Execution Date : 2021-11-25 11:06:08.000000 CET
  Last Successful Execution Date : Never
  Error: [Errno 2] No such file or directory: '/usr/lib/jvm'
  Traceback (most recent call last):
    File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/base/checks/base.py", l                                                                                              ine 820, in run
      self.check(instance)
    File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/oracle/oracle.py", line                                                                                               90, in check
      self.create_connection()
    File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/datadog_checks/oracle/oracle.py", line                                                                                               141, in create_connection
      self.ORACLE_DRIVER_CLASS, connect_string, [self._user, self._password], self._jdbc_driver
    File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/jaydebeapi/__init__.py", line 401, in                                                                                               connect
      jconn = _jdbc_connect(jclassname, url, driver_args, jars, libs)
    File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/jaydebeapi/__init__.py", line 177, in                                                                                               _jdbc_connect_jpype
      jvm_path = jpype.getDefaultJVMPath()
    File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/jpype/_core.py", line 337, in getDefau                                                                                              ltJVMPath
      return finder.get_jvm_path()
    File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/jpype/_jvmfinder.py", line 160, in get                                                                                              _jvm_path
      jvm = method()
    File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/jpype/_jvmfinder.py", line 215, in _ge                                                                                              t_from_known_locations
      for home in self.find_possible_homes(self._locations):
    File "/opt/datadog-agent/embedded/lib/python2.7/site-packages/jpype/_jvmfinder.py", line 120, in fin                                                                                              d_possible_homes
      for childname in sorted(os.listdir(parent)):
  OSError: [Errno 2] No such file or directory: '/usr/lib/jvm'

uptime
------
  Instance ID: uptime [OK]
  Configuration Source: file:/etc/datadog-agent/conf.d/uptime.d/conf.yaml.default
  Total Runs: 45,639
  Metric Samples: Last Run: 1, Total: 45,639
  Events: Last Run: 0, Total: 0
  Service Checks: Last Run: 0, Total: 0
  Average Execution Time : 0s
  Last Execution Date : 2021-11-25 11:06:04.000000 CET
  Last Successful Execution Date : 2021-11-25 11:06:04.000000 CET

======== JMXFetch

Initialized checks

no checks

Failed checks

no checks

========= Forwarder

Transactions

CheckRunsV1: 45,639
Connections: 0
Containers: 0
Dropped: 0
DroppedOnInput: 0
Events: 0
HostMetadata: 0
IntakeV1: 4,944
Metadata: 0
Pods: 0
Processes: 0
RTContainers: 0
RTProcesses: 0
Requeued: 3
Retried: 3
RetryQueueSize: 0
Series: 0
ServiceChecks: 0
SketchSeries: 0
Success: 96,222
TimeseriesV1: 45,639

Transaction Errors

Total number: 3
Errors By Type:

API Keys status

API key ending with **: API Key valid

========== Endpoints

https://app.datadoghq.eu - API Key ending with:

========== Logs Agent

Sending compressed logs in HTTPS to agent-http-intake.logs.datadoghq.eu on port 443
BytesSent: 0
EncodedBytesSent: 28
LogsProcessed: 0
LogsSent: 0

============ System Probe

System Probe is not running:

Errors
======
error setting up remote system probe util, socket path does not exist: stat /opt/datadog-agent/run/sysprobe.sock: no such file or directory

========= Aggregator

Checks Metric Sample: 9,141,442
Dogstatsd Metric Sample: 3,537,009
Event: 1
Events Flushed: 1
Number Of Flushes: 45,639
Series Flushed: 9,037,136
Service Check: 457,914
Service Checks Flushed: 503,550

========= DogStatsD

Event Packets: 0
Event Parse Errors: 0
Metric Packets: 3,537,008
Metric Parse Errors: 0
Service Check Packets: 0
Service Check Parse Errors: 0
Udp Bytes: 226,277,172
Udp Packet Reading Errors: 0
Udp Packets: 3,537,009
Uds Bytes: 0
Uds Origin Detection Errors: 0
Uds Packet Reading Errors: 0
Uds Packets: 0

Additional environment details (Operating System, Cloud provider, etc): linux

Steps to reproduce the issue:

  1. Set the path to the JDBC driver in the conf.yaml of the Oracle integration

Describe the results you received: The Agent did not use the JDBC driver configured in conf.yaml.

Describe the results you expected: The Agent uses the JDBC driver configured in conf.yaml.

Additional information you deem important (e.g. issue happens only occasionally): Content of the conf.yaml of the Oracle integration:

init_config:

  ## @param global_custom_queries - object - optional
  ## Providing custom queries is also supported. Each query must have 3 fields:
  ##
  ## 1. metric_prefix - This is what each metric will start with.
  ## 2. query - This is the SQL to execute. It can be a simple statement or a
  ##            multi-line script. All rows of the result are evaluated.
  ## 3. columns - This is a list representing each column, ordered sequentially
  ##              from left to right. There are 2 required pieces of data:
  ##                a. type - This is the submission method (gauge, count, etc.).
  ##                b. name - This is the suffix to append to the metric_prefix
  ##                          in order to form the full metric name. If `type` is
  ##                          `tag`, this column will instead be considered a tag
  ##                          and will be applied to every metric collected by
  ##                          this particular query.
  ## 4. tags (optional) - A list of tags to apply to each metric.
  ##
  ## global_custom_queries are applied to all instances where use_global_custom_queries is set to true at the
  ## instance level.
  ##
  ## This:
  ##
  ##  self.gauge('oracle.custom_query.metric1', value, tags=['tester:oracle', 'tag1:value'])
  ##  self.count('oracle.custom_query.metric2', value, tags=['tester:oracle', 'tag1:value'])
  ##
  ## is what the following example configuration would become.
  #
  # global_custom_queries:
  #  - metric_prefix: oracle.custom_query
  #    query: |  # Use the pipe if you require a multi-line script.
  #      SELECT columns
  #      FROM tester.test_table
  #      WHERE conditions
  #    columns:
  #      # Put this for any column you wish to skip:
  #      # - {}
  #      - name: metric1
  #        type: gauge
  #      - name: tag1
  #        type: tag
  #      - name: metric2
  #        type: count
  #    tags:
  #      - tester:oracle

instances:

    ## @param server - string - required
    ## The IP address or hostname of the Oracle Database Server.
    #
  - server: ***

    ## @param service_name - string - required
    ## The Oracle Database service name. To view the services available on your server,
    ## run the following query: `SELECT value FROM v$parameter WHERE name='service_names'`
    #
    service_name: ***

    ## @param user - string - required
    ## The username for the user account.
    #
    user: ***

    ## @param password - string - required
    ## The password for the user account.
    #
    password: ***

    ## @param only_custom_queries - boolean - optional - default: false
    ## Set this parameter to `true` if you want to skip default system, process, and
    ## tablespace metrics checks.
    #
    only_custom_queries: true

    ## @param jdbc_driver_path - string - optional
    ## Set this parameter if you are not using the oracle native client.
    ## You can also add it to your $CLASSPATH instead.
    #
    jdbc_driver_path: "/opt/oracle/product/12.1.0/client_1/jdbc/lib/ojdbc7.jar"

    ## @param tags - list of key:value elements - optional
    ## List of tags to attach to every metric, event and service check emitted by this integration.
    ##
    ## Learn more about tagging: https://docs.datadoghq.com/tagging/
    #
    # tags:
    #   - <KEY_1>:<VALUE_1>
    #   - <KEY_2>:<VALUE_2>

    ## @param use_global_custom_queries - boolean - optional - default: true
    ## Whether or not global_custom_queries should be included for this instance.
    #
    # use_global_custom_queries: true

    ## @param custom_queries - object - optional
    ## Providing custom queries is also supported. Each query must have 3 fields:
    ##
    ## 1. metric_prefix - This is what each metric will start with.
    ## 2. query - This is the SQL to execute. It can be a simple statement or a
    ##            multi-line script. All rows of the result are evaluated.
    ## 3. columns - This is a list representing each column, ordered sequentially
    ##              from left to right. There are 2 required pieces of data:
    ##                a. type - This is the submission method (gauge, count, etc.).
    ##                b. name - This is the suffix to append to the metric_prefix
    ##                          in order to form the full metric name. If `type` is
    ##                          `tag`, this column will instead be considered a tag
    ##                          and will be applied to every metric collected by
    ##                          this particular query.
    ## 4. tags (optional) - A list of tags to apply to each metric.
    ##
    ## custom_queries set here will override global_custom_queries set in the init_config section if
    ## use_global_custom_queries is set to false.
    ## Otherwise, they will be used in addition to global_custom_queries.
    ##
    ## This:
    ##
    ##  self.gauge('oracle.custom_query.metric1', value, tags=['tester:oracle', 'tag1:value'])
    ##  self.count('oracle.custom_query.metric2', value, tags=['tester:oracle', 'tag1:value'])
    ##
    ## is what the following example configuration would become.
    #
    custom_queries:
      - metric_prefix: sql.wms.output
        query: SELECT environment, area, we, wa FROM DATADOG_MONITORING.OUTPUT
        columns:
          - name: environment
            type: tag
          - name: area
            type: tag
          - name: we
            type: gauge
          - name: wa
            type: gauge
hithwen commented 2 years ago

Hello, if you look at the stack trace you'll see that the error is not about finding the driver but about finding the JVM (jvm_path = jpype.getDefaultJVMPath()): the check then fails while probing one of the default JVM locations (OSError: [Errno 2] No such file or directory: '/usr/lib/jvm'). This can happen either because the Agent does not have permission to access that directory, or because the directory really does not exist and the JVM is located elsewhere. If it's the former, grant the appropriate permissions; if it's the latter, set JAVA_HOME as described in the documentation. If you need more assistance with this issue, please contact support.
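For example, a minimal sketch of both checks, assuming a systemd-managed Linux install with the default dd-agent user (the JDK path below is illustrative, not a required location):

  # Does the default location exist, and can the Agent user read it?
  ls -ld /usr/lib/jvm
  sudo -u dd-agent ls /usr/lib/jvm

  # If the JVM lives elsewhere, one way to point JAVA_HOME at it for the
  # Agent service is a systemd drop-in, followed by a restart:
  sudo systemctl edit datadog-agent
  #   [Service]
  #   Environment="JAVA_HOME=/opt/java/jdk-11"   # illustrative path
  sudo systemctl restart datadog-agent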