Closed stevebwall closed 7 years ago
root@gvision:~/open-nti# docker logs opennti_input_jti
2016-12-06 09:58:06 -0800 [info]: reading config file path="/tmp/fluent.conf"
2016-12-06 09:58:06 -0800 [info]: starting fluentd-0.12.26
2016-12-06 09:58:06 -0800 [info]: gem 'fluent-plugin-juniper-telemetry' version '0.2.11'
2016-12-06 09:58:06 -0800 [info]: gem 'fluentd' version '0.12.26'
2016-12-06 09:58:06 -0800 [info]: adding match pattern="jnpr." type="copy"
2016-12-06 09:58:06 -0800 [info]: adding match pattern="debug." type="stdout"
2016-12-06 09:58:06 -0800 [info]: adding match pattern="fluent.**" type="stdout"
2016-12-06 09:58:06 -0800 [info]: adding source type="forward"
2016-12-06 09:58:06 -0800 [info]: adding source type="udp"
2016-12-06 09:58:06 -0800 [info]: adding source type="udp"
2016-12-06 09:58:06 -0800 [info]: adding source type="monitor_agent"
2016-12-06 09:58:06 -0800 [info]: adding source type="debug_agent"
2016-12-06 09:58:06 -0800 [info]: using configuration file:
If I configure the Data Collection Agent Dashboard, I am able to see data using Grafana with no issues.
I am streaming from an MX as below:
swall@tinybud> show version invoke-on all-routing-engines | match Junos:
Junos: 16.1R3.10
Junos: 16.1R3.10
swall@tinybud> show configuration services analytics | display set
set services analytics streaming-server GV remote-address 172.17.250.2
set services analytics streaming-server GV remote-port 50000
set services analytics export-profile EXP local-address 1.0.0.41
set services analytics export-profile EXP local-port 30010
set services analytics sensor S1 server-name GV
set services analytics sensor S1 export-name EXP
set services analytics sensor S1 resource /junos/system/linecard/interface/
set services analytics sensor S3 server-name GV
set services analytics sensor S3 export-name EXP
set services analytics sensor S3 resource /junos/system/linecard/firewall/
set services analytics sensor S4 server-name GV
set services analytics sensor S4 export-name EXP
set services analytics sensor S4 resource /junos/system/linecard/cpu/memory/
set services analytics sensor S5 server-name GV
set services analytics sensor S5 export-name EXP
set services analytics sensor S5 resource /junos/system/linecard/npu/memory/
set services analytics sensor S6 server-name GV
set services analytics sensor S6 export-name EXP
set services analytics sensor S6 resource /junos/system/linecard/interface/logical/usage/
set services analytics sensor S2 server-name GV
set services analytics sensor S2 export-name EXP
set services analytics sensor S2 resource /junos/services/label-switched-path/usage/
Hi @stevebwall
Thanks for looking into OpenNTI
Not all sensors are currently supported; sorry if this part is not clearly documented yet, I still need to do that.
For now, I would recommend keeping:
Also, to support the logical interface and firewall sensors, you need a different version of the JTI input plugin; it's very easy to upgrade.
In the file docker-compose.yml, you need to update the second line and replace `juniper/open-nti-input-jti` with `juniper/open-nti-input-jti:devel`; see the example below.
Before:

input-jti:
  image: juniper/open-nti-input-jti
  container_name: opennti_input_jti

After modification:

input-jti:
  image: juniper/open-nti-input-jti:devel
  container_name: opennti_input_jti
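For reference, the same edit can be scripted. A minimal sketch with `sed`; the docker-compose.yml fragment below is a stand-in created just so the example is self-contained, not OpenNTI's full shipped file:

```shell
# Create a stand-in docker-compose.yml fragment (illustration only).
cat > docker-compose.yml <<'EOF'
input-jti:
  image: juniper/open-nti-input-jti
  container_name: opennti_input_jti
EOF

# Append ":devel" to the image line; the $ anchor leaves an
# already-tagged image untouched if the edit is run twice.
sed -i 's|image: juniper/open-nti-input-jti$|image: juniper/open-nti-input-jti:devel|' docker-compose.yml

grep 'image:' docker-compose.yml
# prints:   image: juniper/open-nti-input-jti:devel
```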
Once done, you need to run `make start` once again to reload the container.
This new version is under test right now and it will become the default version very soon. Please let me know if it works better.
If it's still not working, you might have to increase the MTU between your Junos device and the OpenNTI server. If you have too many interfaces, Junos will send fragmented packets.
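To see why the MTU matters, here is a rough back-of-the-envelope check. This is only a sketch: the 3048-byte payload is one of the "length" values from the tcpdump output in this thread, and 1500 is the assumed default Ethernet MTU:

```shell
# Sketch: will a JTI UDP packet of this size be fragmented?
mtu=1500              # assumed default Ethernet MTU
udp_payload=3048      # bytes, one of the "length" values seen in tcpdump
ip_overhead=28        # 20-byte IPv4 header + 8-byte UDP header

packet=$((udp_payload + ip_overhead))
if [ "$packet" -gt "$mtu" ]; then
  echo "packet of ${packet} bytes exceeds MTU ${mtu}: Junos will fragment it"
else
  echo "packet of ${packet} bytes fits in MTU ${mtu}"
fi
# prints: packet of 3076 bytes exceeds MTU 1500: Junos will fragment it
```

This is why raising the MTU on the export interface (as tried later in the thread with `set interfaces xe-0/0/0 mtu 4000`) can help: it lets these large telemetry datagrams travel unfragmented.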
Thanks Damien
Documentation bug
I made the change in docker-compose.yml:
input-jti:
  image: juniper/open-nti-input-jti:devel
  container_name: opennti_input_jti
Then I did 'make start'.
I also reduced my analytics config as below:
set services analytics streaming-server GV remote-address 172.17.250.2
set services analytics streaming-server GV remote-port 50000
set services analytics export-profile EXP local-address 1.0.0.41
set services analytics export-profile EXP local-port 30010
set services analytics sensor S1 server-name GV
set services analytics sensor S1 export-name EXP
set services analytics sensor S1 resource /junos/system/linecard/interface/
set services analytics sensor S3 server-name GV
set services analytics sensor S3 export-name EXP
set services analytics sensor S3 resource /junos/system/linecard/firewall/
set services analytics sensor S6 server-name GV
set services analytics sensor S6 export-name EXP
set services analytics sensor S6 resource /junos/system/linecard/interface/logical/usage/
Finally, I changed the mtu on my export interface:
set interfaces xe-0/0/0 mtu 4000
I am still not seeing any data for the Data Streaming Collector Dashboard.
I am running 16.1R3. Could it be that open-nti has not been fully tested with this new release?
swall@tinybud> show version invoke-on all-routing-engines | grep Junos:
Junos: 16.1R3.10
Junos: 16.1R3.10
I tried downgrading to 15.1:
swall@tinybud> show version invoke-on all-routing-engines | grep Junos:
Junos: 15.1F6-S3.7
Junos: 15.1F6-S3.7
I restarted my docker. I see UDP data:
root@6a38a418a3bc:/# tcpdump -i eth0 -n dst port 50000
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
07:18:19.157940 IP 1.0.0.41.30010 > 172.17.250.2.50000: UDP, length 716
07:18:19.623563 IP 1.0.0.41.30010 > 172.17.250.2.50000: UDP, length 716
Grafana does not report any data and the influxdb `juniper` database is empty. Not sure what else I can check to troubleshoot further...
Hi @stevebwall, sorry for the delay, I was traveling last week. Do you have time this week for a live troubleshooting session? I'm based on the West Coast (PDT).
Yes, I am available today or tomorrow to take a look.... much appreciated.
Today works for me, please send me an email dgarros _ juniper.net
Thanks Damien, with your help this is working now. It appears that I need to stream telemetry data to the host interface IP address, not the Docker IP address. This does not seem intuitive to me, but it is working. I think open-nti should be able to accept data as long as it reaches the Docker IP; in my case tcpdump run from inside the container proved the data was arriving there.
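For anyone hitting the same issue: the usual Docker mechanism behind this is UDP port publishing, so the router streams to the host IP and Docker forwards the datagrams into the container. A hypothetical docker-compose fragment showing the idea (the exact `ports` stanza in OpenNTI's shipped file may differ):

```yaml
input-jti:
  image: juniper/open-nti-input-jti:devel
  container_name: opennti_input_jti
  ports:
    - "50000:50000/udp"   # host UDP 50000 -> container UDP 50000
```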
I have installed open-nti on linux:
root@gvision:~/open-nti# cat /etc/issue
Ubuntu 14.04.5 LTS \n \l
root@a3ffef65766a:/# tcpdump -i eth0 -n dst port 50000 | more
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
09:59:59.907177 IP 1.0.0.41.30010 > 172.17.250.2.50000: UDP, length 3034
09:59:59.908863 IP 1.0.0.41.30010 > 172.17.250.2.50000: UDP, length 3048
09:59:59.910632 IP 1.0.0.41.30010 > 172.17.250.2.50000: UDP, length 3042
Any clues where I can look next?