openconfig / gnmic

gNMIc is a gNMI CLI client and collector
https://gnmic.openconfig.net
Apache License 2.0

Juniper interface statistics #261

Open oh-c opened 1 year ago

oh-c commented 1 year ago

Hi,

I'm trying to figure out whether I'm going insane or whether I should open a case with Juniper.

I'm trying to get interface statistics using gnmic. Doing the following subscribe request results in one metric being returned (as expected).

$ gnmic sub --debug --format event --path "/interfaces/interface[name=em0]/state/counters/out-pkts" --mode once -e json
<snip>
2023/10/24 11:37:48.344466 /home/runner/work/gnmic/gnmic/app/gnmi_client_subscribe.go:283: [gnmic] sending gNMI SubscribeRequest: subscribe='subscribe:{subscription:{path:{elem:{name:"interfaces"} elem:{name:"interface" key:{key:"name" value:"em0"}} elem:{name:"state"} elem:{name:"counters"} elem:{name:"out-pkts"}}} mode:ONCE}', mode='ONCE', encoding='JSON', to junos
[
  {
    "name": "default-1698147468",
    "timestamp": 1698147472368407362,
    "tags": {
      "interface_name": "em0",
      "source": "junos",
      "subscription-name": "default-1698147468"
    },
    "values": {
      "/interfaces/interface/state/counters/out-pkts": 7114902
    }
  }
]
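
For reference, the same ONCE request can also be defined as a named subscription in a gnmic config file instead of on the command line. This is only a sketch; the address, credentials and subscription name below are placeholders, not my real values:

# gnmic.yaml (sketch -- address and credentials are placeholders)
address: junos
username: admin
password: admin
insecure: true
encoding: json

subscriptions:
  em0-out-pkts:
    paths:
      - /interfaces/interface[name=em0]/state/counters/out-pkts
    mode: once

Running gnmic --config gnmic.yaml subscribe --name em0-out-pkts --format event should then return the same single metric.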

But when trying to retrieve out-octets, I get everything for all interfaces.

$ gnmic sub --debug --format event --path "/interfaces/interface[name=em0]/state/counters/out-octets"
2023/10/24 11:40:08.636380 /home/runner/work/gnmic/gnmic/app/gnmi_client_subscribe.go:283: [gnmic] sending gNMI SubscribeRequest: subscribe='subscribe:{subscription:{path:{elem:{name:"interfaces"} elem:{name:"interface" key:{key:"name" value:"em0"}} elem:{name:"state"} elem:{name:"counters"} elem:{name:"out-octets"}}} mode:ONCE}', mode='ONCE', encoding='JSON', to junos

<snip a huge amount of metrics>

[
  {
    "name": "default-1698147639",
    "timestamp": 1698147643771293237,
    "tags": {
      "interface_name": "xe-0/0/5:3",
      "source": "junos",
      "subscription-name": "default-1698147639"
    },
    "values": {
      "/interfaces/interface/state/oper-status": "DOWN"
    }
  },
  {
    "name": "default-1698147639",
    "timestamp": 1698147643771293237,
    "tags": {
      "interface_name": "xe-0/0/5:3",
      "source": "junos",
      "subscription-name": "default-1698147639"
    },
    "values": {
      "/interfaces/interface/state/high-speed": 10000
    }
  }
]

[
  {
    "name": "default-1698147639",
    "timestamp": 1698147643838313405,
    "tags": {
      "interface_name": "em0",
      "source": "junos",
      "subscription-name": "default-1698147639"
    },
    "values": {
      "/interfaces/interface/state/counters/out-octets": 1123033693
    }
  }
]

Anyone else collecting interface metrics on Junos devices? We're running 21.4R3-S3 software.

flyboyjon commented 1 year ago

Works ok for me (single metric returned per interval) with this:

gnmic sub --format event --path "/interfaces/interface[name=em0]/state/counters/out-octets" --address "xxxxxx:xxxx" --username xxxxxx --password xxxxxx --insecure --stream-mode sample --sample-interval 10s

Juniper MX running Version 21.2R3-S4.8, gNMIC version : 0.32.0
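
The same thing in config-file form, in case that is easier to compare against (sketch only, using the same masked placeholders as the CLI above):

targets:
  "xxxxxx:xxxx":
    username: xxxxxx
    password: xxxxxx
    insecure: true

subscriptions:
  em0-out-octets:
    paths:
      - /interfaces/interface[name=em0]/state/counters/out-octets
    stream-mode: sample
    sample-interval: 10s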

Have you tried a different interface or device/target?

lazyb0nes commented 1 year ago

Works okay for me as well, with what I've tried.

oh-c commented 12 months ago

Bizarre. I've opened a case with Juniper and their reply is that this behaviour seems to be by design. Weird, given that @lazyb0nes and @flyboyjon see the correct (or at least what I would expect to be the correct) behaviour.

Unfortunately the only platform I have to test on is MX960.

flyboyjon commented 12 months ago

What happens if you try to retrieve a different single interface, xe-0/0/0 for example? Perhaps there is something unique about your em0 interface config that triggers multiple responses.

lazyb0nes commented 11 months ago

Honestly though, does it matter that much? I agree it is weird, and I'm not sure why JTAC would say it is by design (is the RE old, perhaps?). I've had issues of my own, but so far only on QFX5k devices; my MX devices are fine, although they are very modern. If I recall correctly, the MX960 is still being produced.

You could always just drop the metrics you don't want later in Prometheus if you don't want to store them.
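Or drop them in gnmic itself before they ever reach the output, for example with an event-drop processor attached to a prometheus output. Rough sketch only; the processor/output names, listen port and regexes here are made up and would need to match the value paths you actually see:

processors:
  drop-noise:
    event-drop:
      value-names:
        - ".*/state/oper-status"
        - ".*/state/high-speed"

outputs:
  prom:
    type: prometheus
    listen: ":9804"
    event-processors:
      - drop-noise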

lucasalvatore commented 5 months ago

FWIW, I see the same behaviour on an MX10k3 running Junos 22.2R3-S2. I also opened a case with Juniper and they said it's by design and to "just use grep" 🤦

lazyb0nes commented 5 months ago

Sounds like you didn't really get through to people who know what you are talking about. I personally don't care that much; this is simply how Juniper has chosen to implement gNMI prior to Junos 23.

I have other problems though, @lucasalvatore: running 22.2R3-S3, if I define the following subscriptions

paths:
  - /interfaces/
  - /junos/system/linecard/interface
  - /junos/system/linecard/optics
  - /junos/system/linecard/firewall/
  - /components/component
  - /system/alarms
stream-mode: sample
sample-interval: 3s

I see a very weird behaviour where apparently all counter metrics under the interfaces paths are delayed by 4-5 minutes. I have opened a case with Juniper and we are currently investigating it, because it started when I upgraded to 22.2R3-S2; before that we had no issues.

Digging a bit deeper into it, it seems to be tied to the /components paths somehow. If I had to guess, it's something wrong with the internal RPC processes responsible for delivering the metrics; the interfaces paths are owned by a different process than the others.
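
If it really is the /components collector getting in the way, one thing I might try is splitting the paths into two named subscriptions so the interface counters come in on their own stream (as far as I know gnmic opens a separate gNMI subscription per named entry). Just a sketch; the names and the second sample interval are arbitrary:

subscriptions:
  interfaces:
    paths:
      - /interfaces/
      - /junos/system/linecard/interface
      - /junos/system/linecard/optics
      - /junos/system/linecard/firewall/
    stream-mode: sample
    sample-interval: 3s
  chassis:
    paths:
      - /components/component
      - /system/alarms
    stream-mode: sample
    sample-interval: 30s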

Just thought I should share my experience so far. But I don't see the original issue as an error on Juniper's part: if you ask for interface data, the fact that you then also get the description, parent AE name and such does make sense :)