Vadims06 / ospfwatcher

History of all changes in OSPF Topology
https://topolograph.com/ospf-monitoring
GNU General Public License v3.0

Simple example of configuration on only one server #7

Closed: lyma closed this issue 6 months ago

lyma commented 11 months ago

Hi @Vadims06

Thank you for your excellent project!!

Can you provide an example of an "all in one" server installation?

I managed to run Topolograph using Docker; I created the GRE tunnel and the OSPF neighborship is up, so everything looks fine, but it doesn't pass the data to Topolograph, nor did it create the "Index Templates" in Kibana (running on the same server with Docker too).
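
For context, the GRE tunnel on the host was created along these lines (a sketch with placeholder addresses, not my exact commands):

ip tunnel add tun0 mode gre remote <router-ip> local <host-ip> ttl 255
ip link set tun0 up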

I really don't know what could be happening, and since I don't have much experience with Docker, I can't debug it properly...

If you need more information, please let me know.

Regards

Vadims06 commented 11 months ago

Hi @lyma , Thank you for submitting the request. I'm working on a fix and I will update you shortly.

Vadims06 commented 11 months ago

Hi @lyma, I applied some changes to OSPF Watcher. Could you please do a git pull, uncomment DEBUG_BOOL="True" in the .env file here, and then

docker-compose build
docker-compose up -d

and check the docker logs logstash output. It should print each event in the log. Then try to search for events on the OSPF Monitoring page; they should appear there. If not, I suggest checking the logs in the DB; I added instructions on how to do that in the README here. Keep me updated on this.
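
For reference, following the output could look like this (a sketch; container names as shown by docker ps):

docker logs -f logstash    # each OSPF event should be printed here
docker logs -f watcher     # the watcher's own stdout, useful if logstash stays quiet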

lyma commented 11 months ago

Hi @Vadims06

Thank you for the response.

The "watcher"container still reseting after seconds... so i cant see logs there:

root@topolograph:~/ospfwatcher# docker exec -it watcher cat /home/watcher/watcher/logs/watcher.log
cat: /home/watcher/watcher/logs/watcher.log: No such file or directory

root@topolograph:~# docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS                                                                                                                                                                                NAMES
7c006e886e28   ospfwatcher_watcher           "python pytail.py"       22 minutes ago   Up 5 seconds                                                                                                                                                                                         watcher
efe4ab4ddc90   ospfwatcher_logstash          "/usr/local/bin/dock…"   22 minutes ago   Up 22 minutes   5044/tcp, 9600/tcp                                                                                                                                                                   logstash
25a6e188fb61   quagga:1.0                    "/sbin/tini -- /usr/…"   22 minutes ago   Up 22 minutes                                                                                                                                                                                        quagga
1ca637a37d90   docker-elk_logstash           "/usr/local/bin/dock…"   25 minutes ago   Up 25 minutes   0.0.0.0:5044->5044/tcp, :::5044->5044/tcp, 0.0.0.0:9600->9600/tcp, :::9600->9600/tcp, 0.0.0.0:50000->50000/tcp, :::50000->50000/tcp, 0.0.0.0:50000->50000/udp, :::50000->50000/udp   docker-elk_logstash_1
9c7236f5c592   docker-elk_kibana             "/bin/tini -- /usr/l…"   25 minutes ago   Up 25 minutes   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                                                                                                                            docker-elk_kibana_1
0ee6806b4b21   docker-elk_elasticsearch      "/bin/tini -- /usr/l…"   25 minutes ago   Up 25 minutes   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp                                                                                                 docker-elk_elasticsearch_1
13db9327eaee   nginx:latest                  "/docker-entrypoint.…"   54 minutes ago   Up 54 minutes   80/tcp, 0.0.0.0:8080->8079/tcp, :::8080->8079/tcp                                                                                                                                    webserver
9f8e801b64e2   vadims06/topolograph:latest   "gunicorn -w 4 --bin…"   54 minutes ago   Up 54 minutes   5000/tcp                                                                                                                                                                             flask
5799c6c5c684   mongo:4.0.8                   "docker-entrypoint.s…"   54 minutes ago   Up 54 minutes   27017/tcp                                                                                                                                                                            mongodb

root@topolograph:~# docker logs logstash
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2023-10-31T19:09:22,626][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2023-10-31T19:09:22,635][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.0", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +indy +jit [linux-x86_64]"}
[2023-10-31T19:09:22,637][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
[2023-10-31T19:09:22,673][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2023-10-31T19:09:22,684][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2023-10-31T19:09:23,158][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"acc819cb-3fb8-4516-a817-d130e1b07bfd", :path=>"/usr/share/logstash/data/uuid"}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sinatra-2.2.1/lib/sinatra/base.rb:931: warning: constant Tilt::Cache is deprecated
TimerTask timeouts are now ignored as these were not able to be implemented correctly
TimerTask timeouts are now ignored as these were not able to be implemented correctly
TimerTask timeouts are now ignored as these were not able to be implemented correctly
TimerTask timeouts are now ignored as these were not able to be implemented correctly
[2023-10-31T19:09:23,880][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-10-31T19:09:26,005][INFO ][org.reflections.Reflections] Reflections took 73 ms to scan 1 urls, producing 119 keys and 417 values
[2023-10-31T19:09:26,995][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-10-31T19:09:27,037][WARN ][deprecation.logstash.inputs.file] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-10-31T19:09:28,610][WARN ][deprecation.logstash.codecs.json] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-10-31T19:09:28,726][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-10-31T19:09:28,796][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-10-31T19:09:28,855][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-10-31T19:09:29,115][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//192.168.1.179:9200"]}
[2023-10-31T19:09:29,446][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@192.168.1.179:9200/]}}
[2023-10-31T19:09:29,681][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@192.168.1.179:9200/"}
[2023-10-31T19:09:29,694][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.10.4) {:es_version=>8}
[2023-10-31T19:09:29,696][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2023-10-31T19:09:29,733][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-10-31T19:09:29,734][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-10-31T19:09:29,778][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:disabled}
[2023-10-31T19:09:29,837][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>750, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x64b87ab1 run>"}
[2023-10-31T19:09:31,364][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.52}
[2023-10-31T19:09:31,672][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_6cb83d58431afcf4073e44421140fad4", :path=>["/home/watcher/watcher/logs/watcher.log"]}
[2023-10-31T19:09:31,686][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-10-31T19:09:31,733][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2023-10-31T19:09:31,737][INFO ][filewatch.observingtail  ][main][watcher] START, creating Discoverer, Watch with file and sincedb collections

root@topolograph:~/topolograph-docker# docker exec -it quagga cat /var/log/quagga/ospfd.log
2023/10/27 19:20:10 OSPF: [SHWNK-NWT5S][EC 100663304] No such command on config line 16: ip protocol ospf route-map TO_KERNEL
2023/10/27 19:20:11 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:15 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:19 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:23 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:28 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:32 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:37 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:41 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:46 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:50 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:20:56 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:21:00 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:21:07 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:21:11 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:21:22 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:21:26 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:21:43 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
2023/10/27 19:21:47 OSPF: [G3N32-0AWQH] Vty connection from 127.0.0.1
root@topolograph:~# docker exec -it watcher /bin/bash
root@topolograph:/home/watcher/watcher# set

BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:complete_fullquote:expand_aliases:extquote:force_fignore:globasciiranges:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=([0]="0")
BASH_ARGV=()
BASH_CMDS=()
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="5" [1]="1" [2]="4" [3]="1" [4]="release" [5]="x86_64-pc-linux-gnu")
BASH_VERSION='5.1.4(1)-release'
COLUMNS=187
DIRSTACK=()
EUID=0
GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568
GROUPS=()
GROUP_ID=1000
HISTFILE=/root/.bash_history
HISTFILESIZE=500
HISTSIZE=500
HOME=/root
HOSTNAME=topolograph
HOSTTYPE=x86_64
IFS=$' \t\n'
LANG=C.UTF-8
LINES=61
MACHTYPE=x86_64-pc-linux-gnu
MAILCHECK=60
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PIPESTATUS=([0]="1")
PPID=0
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
PS2='> '
PS4='+ '
PWD=/home/watcher/watcher
PYTHON_GET_PIP_SHA256=c518250e91a70d7b20cceb15272209a4ded2a0c263ae5776f129e0d9b5674309
PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/3cb8888cc2869620f57d5d2da64da38f516078c7/public/get-pip.py
PYTHON_PIP_VERSION=21.2.4
PYTHON_SETUPTOOLS_VERSION=57.5.0
PYTHON_VERSION=3.9.9
QUAGGA_HOST=127.0.0.1
SHELL=/bin/bash
SHELLOPTS=braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor
SHLVL=1
TERM=xterm
TEST_MODE=False
TOPOLOGRAPH_HOST=192.168.1.179
TOPOLOGRAPH_PORT=8080
TOPOLOGRAPH_WEB_API_PASSWORD=ospf
TOPOLOGRAPH_WEB_API_USERNAME_EMAIL=ospf@topolograph.com
UID=0
USER_ID=1000
WATCHER_LOGFILE=/home/watcher/watcher/logs/watcher.log
WATCHER_NAME=demo-watcher
_=']'
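
(Side note: a quick reachability check against Topolograph using the variables above could look like this; a sketch that assumes Python is available inside the watcher image:)

docker exec -it watcher python -c "import os, urllib.request; print(urllib.request.urlopen('http://{}:{}/'.format(os.environ['TOPOLOGRAPH_HOST'], os.environ['TOPOLOGRAPH_PORT'])).status)"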
root@topolograph:~# docker logs watcher
Traceback (most recent call last):
  File "/home/watcher/watcher/pytail.py", line 63, in <module>
lsdb_output:OSPF Router with ID (192.168.1.179)

                Router Link States (Area 0.0.0.0)

  LS age: 3
  Options: 0x2  : *|-|-|-|-|-|E|-
  LS Flags: 0x3
  Flags: 0x0
  LS Type: router-LSA
  Link State
r_post.text
    graph_obj = GraphFromTopolograph()
  File "/home/watcher/watcher/Helper.py", line 730, in __init__
    self.init_graph()
  File "/home/watcher/watcher/Helper.py", line 765, in init_graph
    raise ValueError(f"{r_post.reason}, {_error}")
ValueError: UNAUTHORIZED, Provided authorization is not valid
Traceback (most recent call last):
  File "/home/watcher/watcher/pytail.py", line 63, in <module>
lsdb_output:
       OSPF Router with ID (192.168.1.179)

                Router Link States (Area 0.0.0.0)

  LS age: 329
  Options: 0x2  : *|-|-|-|-|-|E|-
  LS Flags: 0x6
  Flags: 0x0
  LS Type: router-LSA

r_post.text
    graph_obj = GraphFromTopolograph()
  File "/home/watcher/watcher/Helper.py", line 730, in __init__
    self.init_graph()
  File "/home/watcher/watcher/Helper.py", line 765, in init_graph
    raise ValueError(f"{r_post.reason}, {_error}")
ValueError: UNAUTHORIZED, Provided authorization is not valid
lsdb_output:OSPF Router with ID (192.168.1.179)

Traceback (most recent call last):
  File "/home/watcher/watcher/pytail.py", line 63, in <module>

All passwords and ports are the defaults, and the API settings are default too. The IPs used in .env are the host's IP.

Any idea? :)

Regards

Vadims06 commented 11 months ago

Hi @lyma, thank you for the output. Oh, I see. I probably missed it in the README, but could you please open your local Topolograph and:

  1. Create a user ospf@topolograph.com with the password ospf in the Local Registration tab.
  2. Add your local subnets, i.e. 10.0.0.0/8, 172.16.0.0/20, 192.168.0.0/16, under API -> Authorised source IP.

So, the error states that OSPF Watcher is not able to connect to Topolograph to parse the topology. I will update my README. Thank you, and please give me feedback.
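
After creating the user, restarting the watcher should let it re-authenticate; a quick check could be (a sketch):

docker restart watcher
docker logs -f watcher    # the UNAUTHORIZED traceback should no longer appear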

Vadims06 commented 11 months ago

Updated README link.

lyma commented 11 months ago

Works like a charm! Great job!

Thanks!

PS: Any plans to implement a network node name collector via SNMP as an alternative to DNS?

lyma commented 11 months ago

(image attached: OSPF network topology screenshot)

Vadims06 commented 11 months ago

> PS: Any plans to implement a network node name collector via SNMP as an alternative to DNS?

Actually, no. It would be better to provide an API for changing node names in a saved topology and let customers prepare the Router ID-to-name mapping themselves, using their own tools/scripts to get the data via SNMP or to request it from a network management tool (like NetBox, Prime, etc.).

The topology looks like a work of art. Can I share it on my social networks? :) I guess multiple events could happen and be tracked in such a big topology, so I would be glad for the chance to see how multiple events are placed on the timeline dashboard. Could you please share it here or drop it to my admin at topolograph.com mailbox?

P.S. If you don't mind sharing the LSDB, I could do some quick analysis of your topology: asymmetric paths, bottleneck areas, etc.
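
For example, the hostname part of such a mapping could be collected with net-snmp's snmpget (a sketch; community string and router address are placeholders):

# sysName.0 returns the router's configured hostname
snmpget -v2c -c public -Ovq <router-ip> 1.3.6.1.2.1.1.5.0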

Vadims06 commented 11 months ago

I would also be grateful for your feedback on the installation process. Was the guidance/README clear to follow? Is there anything in this process you would like to see automated, like creating multiple GRE tunnels with Quagga daemons in separate isolated namespaces via a single command?
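
For instance, the per-tunnel isolation could boil down to something like this (a rough iproute2 sketch; names and addresses are placeholders):

# hypothetical: one isolated namespace per GRE tunnel/Quagga instance
ip netns add watcher1
ip link add gre1 type gre remote <router-ip> local <host-ip> ttl 255
ip link set gre1 netns watcher1
ip netns exec watcher1 ip addr add <tunnel-ip>/30 dev gre1
ip netns exec watcher1 ip link set gre1 up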

lyma commented 11 months ago

You can use the topology image. No problem.

I still find the installation manual somewhat monolithic and confusing for the general public.

It would be interesting to have two different installation manuals:

1. For those who already have Topolograph and ELK running;
2. For those who have nothing running and want to run everything on a single server. This manual would show how to install ELK and Topolograph from scratch, using Docker, and integrate them with OSPF Watcher.

I noticed that in my case (all-in-one with Docker) ospfwatcher is not creating new information in the "OSPF Monitoring" tab. It created one saved graph at startup and only this:

(image attached: OSPF Monitoring tab screenshot)

Would you like to see any logs?

Vadims06 commented 10 months ago

@lyma, sorry for the delay. Yeah, could you please make sure that topology changes are stored in the database? Could you please share the output of step #3.ii of the Troubleshooting section (link):

docker exec -it mongodb /bin/bash

Inside the container, run (adjusting credentials as needed):

mongo mongodb://$MONGO_INITDB_ROOT_USERNAME:$MONGO_INITDB_ROOT_PASSWORD@mongodb:27017/admin?gssapiServiceName=mongodb
use admins

Check the last two (or N) records in adjacency changes (adj_change) or cost changes (cost_change):

db.adj_change.find({}).sort({_id: -1}).limit(2)
db.cost_change.find({}).sort({_id: -1}).limit(2)

Meanwhile, I'm going to create an OSPF Watcher environment from scratch and check whether topology changes are exported to OSPF Monitoring.

Vadims06 commented 8 months ago

Hi @lyma, I hope you haven't lost interest in OSPF Watcher yet :) I think I found the root cause of the issue. I noticed that the Topolograph compose file was not updated, which is why you might have gotten the following error during your installation process: ERROR: Network topolograph_backend declared as external, but could not be found. Please create the network manually using `docker network create topolograph_backend` and try again. And after creating the topolograph_backend network manually, ospfwatcher is not able to reach Topolograph to save topology change logs. So, in order to fix it, please update Topolograph's docker-compose file via

cd /topolograph-docker
docker-compose down
docker network remove topolograph_backend
git pull
docker-compose up -d

and check for the existence of logs after some time. I'm looking forward to hearing from you soon.
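
Once everything is back up, one way to confirm that both stacks ended up on the shared network (a sketch using docker network inspect):

docker network inspect topolograph_backend --format '{{range .Containers}}{{.Name}} {{end}}'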