Open · OtoKiroo opened 3 months ago
Hi @OtoKiroo , if the security connection is enabled in ELK, it requires a certificate and the attached script doesn't work, but that's not a problem, because the Index template can be created manually (it only needs to be done once). So I'm following the official documentation https://www.elastic.co/guide/en/kibana/7.17/index-patterns.html:
Name: ospf-watcher-updown-events
Pattern: watcher-updown-events*
Next
Component templates -> Next
Index settings -> Next
Mapping:
```json
"@timestamp": {"type": "date"},
"watcher_time": {"type": "date", "format": "date_optional_time"},
"watcher_name": {"type": "keyword"},
"event_name": {"type": "keyword"},
"event_object": {"type": "keyword"},
"event_status": {"type": "keyword"},
"event_detected_by": {"type": "keyword"},
"graph_time": {"type": "keyword"}
```
Advanced options -> Disable dynamic mapping
Next Next
3. Create another template
Name: ospf-watcher-costs-changes
Pattern: watcher-costs-changes*
Mapping:
```json
"@timestamp": {"type": "date"},
"watcher_time": {"type": "date"},
"watcher_name": {"type": "keyword"},
"event_name": {"type": "keyword"},
"event_object": {"type": "keyword"},
"event_status": {"type": "keyword"},
"old_cost": {"type": "integer"},
"new_cost": {"type": "integer"},
"event_detected_by": {"type": "keyword"},
"subnet_type": {"type": "keyword"},
"shared_subnet_remote_neighbors_ids": {"type": "keyword"},
"graph_time": {"type": "keyword"}
```
Advanced options -> Disable dynamic mapping
Next Next
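The mappings above can also be assembled as a single Elasticsearch `_index_template` request body instead of clicking through the Kibana wizard. A minimal sketch for the costs-changes template (field names and the pattern are taken from the steps above; `"dynamic": False` mirrors the "Disable dynamic mapping" option):

```python
# Sketch: build the ospf-watcher-costs-changes index-template body as the
# JSON expected by PUT /_index_template/<name>. Field names and types are
# taken from the wizard steps above.
import json

keyword_fields = [
    "watcher_name", "event_name", "event_object", "event_status",
    "event_detected_by", "subnet_type",
    "shared_subnet_remote_neighbors_ids", "graph_time",
]
properties = {name: {"type": "keyword"} for name in keyword_fields}
properties.update({
    "@timestamp": {"type": "date"},
    "watcher_time": {"type": "date"},
    "old_cost": {"type": "integer"},
    "new_cost": {"type": "integer"},
})

template_body = {
    "index_patterns": ["watcher-costs-changes*"],
    # "dynamic": False corresponds to "Disable dynamic mapping" in Kibana
    "template": {"mappings": {"dynamic": False, "properties": properties}},
}

print(json.dumps(template_body, indent=2))
```

The printed JSON can be sent to Elasticsearch with any HTTP client (curl, Kibana Dev Tools, or the python script from this repo).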
This is pretty much it. I hope it will help.
@Vadims06
I was able to create them by turning off the https certificate and then turning it back on. I also tried your method of creating them manually, but it does not work. The Index Templates exist, but there is nothing in "Indices". When I try to create the index pattern following the GitHub page, it refuses to create the pattern because the ospf-watcher-costs-changes and ospf-watcher-updown-events sources do not appear or exist:
"Name must match one or more data streams, indices, or index aliases".
@OtoKiroo ,
If you are on that step and do not see the following indexes, it means that the Watcher hasn't exported logs yet; watcher-costs-changes and watcher-updown-events should be in the list. Have you configured a GRE tunnel between a network device and the Watcher? Please set `DEBUG` to `True` and check the logstash logs via `docker logs logstash`; you should see the logs exported to ELK.
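For reference, the relevant lines in the `.env` file could look like this (variable names are taken from later messages in this thread; exact names may differ between versions):

```
DEBUG_BOOL="True"
EXPORT_TO_ELASTICSEARCH_BOOL=True
```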
@Vadims06 When setting the DEBUG variable to True, I get this error:
```
[+] Running 0/1
⠿ logstash-index-creator Error 0.3s
Error response from daemon: pull access denied for ospfwatcher_watcher, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
```
I have tried deleting the directory and rebuilding from source, and I still get the same error. Oddly enough, I now get this error regardless of what I change in the .env file, even from a fresh install. Was there an update/change recently?
Hi @OtoKiroo ,
I found the issue: the name of the container should be `ospfwatcher-watcher` instead of `ospfwatcher_watcher` in the docker-compose file here.
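A sketch of what the corrected service entry could look like (the service name and surrounding fields are illustrative, not copied from the repo):

```yaml
# Sketch only: the image reference should use a hyphen so it matches the
# image name Docker Compose v2 builds (ospfwatcher-watcher).
services:
  logstash-index-creator:
    image: ospfwatcher-watcher:latest   # was: ospfwatcher_watcher:latest
```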
@Vadims06 Thanks! I haven't had time to work on this project again, but I will let you know ASAP if I encounter any other problems.
Hey @Vadims06, I've tested the change and it doesn't seem to change anything.
```
⠿ logstash-index-creator Error 0.8s
Error response from daemon: pull access denied for ospfwatcher-watcher, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
```
Same error as before, but you can see it is now ospfwatcher-watcher instead of ospfwatcher_watcher.
Another question: would you be able to add support for Graylog? Thanks.
Hi @OtoKiroo ,
did you run the `docker-compose build` command before? The error states that there is no such image; it will appear after the build finishes. You can check that this image exists by running the `docker image ls` command. Please share the command's output if the error persists.
Regarding Graylog: add `RUN logstash-plugin install logstash-output-gelf` to `logstash/Dockerfile`, add a gelf output to `logstash/pipeline/logstash.conf`, and run `docker-compose build` one more time.
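A sketch of the `logstash/Dockerfile` addition (the base image and tag are assumptions; logstash 7.17.0 appears in the logs later in this thread):

```dockerfile
# Install the GELF output plugin so logstash can ship events to Graylog.
FROM docker.elastic.co/logstash/logstash:7.17.0
RUN logstash-plugin install logstash-output-gelf
```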
@Vadims06 I had the same error even when rebuilding. I removed the directory, redid it from scratch, and it works fine now. DEBUG_BOOL="True" is uncommented, as well as EXPORT_TO_ELASTICSEARCH_BOOL=True. The rest of the .env file is configured properly as far as I can tell, but I still do not get any logs and the indices are missing. I tried manually creating the Index Templates as well as modifying the python script to use HTTP to create them; both methods work, but there are still no Indices.
As for the Graylog GELF output, I'm not familiar with it at all, but here is what I've added to the very bottom of logstash.conf:
```
output {
  gelf {
    host => "GRAYLOG_IP"
    port => 12201
    protocol => "UDP"
    sender => "Logstash"
    facility => "GELF"
  }
}
```
Obviously I won't know if it works until I fix the issue though.
`docker logs logstash` doesn't show any logs being exported to the ELK Stack from ospfwatcher.
Is the GRE tunnel needed even when using test mode? "Note: You can skip this step and run ospfwatcher in test_mode, so the test LSDB from the file will be taken and test changes (loss of adjacency and change of OSPF metric) will be posted in ELK"
I've decided to build a new elk-stack in docker this time and configured the index templates manually as shown above, but I still get no indices or logs sent from ospfwatcher to the elk-stack.
topolograph is configured properly with a .txt file as a test LSDB; real-time monitoring still shows ospfwatcher as not configured and without logs. The ospfwatcher .env is configured properly with DEBUG_BOOL="True" and TEST_MODE="True", and EXPORT_TO_ELASTICSEARCH_BOOL=True is uncommented as well. All the other variables are set properly: IP, ports, username/password, etc.
@OtoKiroo ,
could you please share the output of the `docker image ls` command.
> Is the GRE tunnel needed even if using test mode ?

No, it's not needed.
@Vadims06 I do get these 2 errors when using `docker compose up -d`; the first one is irrelevant, but I'm not sure about the second.
```
WARN[0000] The "EXPORT_TO_ZABBIX_BOOL" variable is not set. Defaulting to a blank string.
WARN[0000] network internal: network.external.name is deprecated. Please set network.name with external: true
```
```
<none> <none> 6e96457bbd67 54 minutes ago 1.31GB
<none> <none> fec6db810bef 56 minutes ago 1.31GB
<none> <none> d374068d4972 About an hour ago 24.7MB
<none> <none> 79c53c615cc2 About an hour ago 649MB
<none> <none> 9dd7b537c536 About an hour ago 1.31GB
<none> <none> 0215cdf8d674 2 hours ago 24.7MB
ospfwatcher-watcher latest 7ad2fff8e426 2 hours ago 176MB
ospfwatcher-logstash latest cbe1c0ad261a 2 hours ago 908MB
quagga 1.0 7b5e383a9d80 2 hours ago 154MB
<none> <none> cec6b1ea2333 3 hours ago 645MB
<none> <none> b542413ebbfc 3 hours ago 645MB
<none> <none> 2d95ed1e6375 3 hours ago 648MB
<none> <none> a2566c999cc0 3 hours ago 648MB
<none> <none> 65c728c1ce8f 4 hours ago 648MB
<none> <none> 7fea2a7bf3ff 4 hours ago 645MB
<none> <none> de1903ef1bf9 4 hours ago 632MB
targotelecom/targo-server latest 0771e57ff625 24 hours ago 766MB
<none> <none> d535ea9c267c 29 hours ago 176MB
<none> <none> acb7cd76675c 29 hours ago 154MB
targotelecom/n8n-targo latest 4141d793db83 6 days ago 1.04GB
docker-elk-kibana latest fbe8cee6b278 3 weeks ago 1.06GB
docker-elk-logstash latest d78c3a8f6130 3 weeks ago 843MB
docker-elk-setup latest ace2f7eceb40 3 weeks ago 1.24GB
docker-elk-elasticsearch latest 7eb2de223728 3 weeks ago 1.24GB
jenkins/jenkins lts 2371da23064a 5 weeks ago 462MB
vadims06/topolograph latest befe30844b0b 8 weeks ago 1.19GB
nginx latest e4720093a3c1 2 months ago 187MB
pepinouz/targo-frontend latest bd231e939a94 2 months ago 1.14GB
postgres 12-alpine db2247d3e23c 2 months ago 234MB
ghcr.io/goauthentik/server 2023.10.7 965799e00878 3 months ago 670MB
ghcr.io/goauthentik/ldap 2023.10.7 452fae8abd02 3 months ago 37.8MB
redis alpine 435993df2c8d 3 months ago 41MB
portainer/portainer-ce latest 1a0fb356ea35 4 months ago 294MB
node lts-alpine3.16 725b4a5d6f69 12 months ago 173MB
nginx 1.17.5-alpine b6753551581f 4 years ago 21.4MB
mongo 4.0.8 394204d45d87 5 years ago 410MB
```
Thanks for the output.
Right, so `ospfwatcher-watcher:latest` exists; could you please share the output of this command?
```
docker run -it --network=topolograph_backend --env-file=./.env -v ./logstash/index_template/create.py:/home/watcher/watcher/create.py ospfwatcher-watcher:latest python create.py
```
It runs `create.py` inside the `ospfwatcher-watcher:latest` container for ELK index creation.
```
ospfwatcher % docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
logstash ospfwatcher-logstash "/usr/local/bin/dock…" logstash About a minute ago Up About a minute (healthy) 5044/tcp, 9600/tcp
quagga quagga:1.0 "/sbin/tini -- /usr/…" quagga About a minute ago Up About a minute
watcher ospfwatcher-watcher "python pytail.py" watcher About a minute ago Up About a minute
```
Check the logstash logs via `docker logs logstash`; you should see the following output:
```
{
"graph_time" => "30Apr2024_23h02m06s_7_hosts",
"event_detected_by" => "10.1.123.24",
"subnet_type" => "transit",
"@metadata" => {
"z_item_value" => "OSPF link cost changed between:10.1.123.24-10.1.123.23_10.1.1.2_10.1.123.24, old:10, new:777, detected by:10.1.123.24",
"mongo_collection_name" => "cost_change",
"host" => "003e2df97b53",
"mongo_id" => "output_mongo_cost",
"elasticsearch_index" => "watcher-costs-changes",
"zabbix_host" => "ospf_link_cost_change",
"webhook_item_value" => "OSPF link cost changed between:10.1.123.24-10.1.123.23_10.1.1.2_10.1.123.24, old:10, new:777, detected by:10.1.123.24",
"z_object_item_name" => "ospf_link_cost_change",
"zabbix_server_host" => "192.168.0.73",
"path" => "/home/watcher/watcher/logs/watcher.log"
},
"message" => "2024-04-30T23:02:11Z,demo-watcher,metric,10.1.123.24,changed,old_cost:10,new_cost:777,10.1.123.24,transit,10.1.123.23_10.1.1.2_10.1.123.24,30Apr2024_23h02m06s_7_hosts",
"watcher_name" => "demo-watcher",
"path" => "/home/watcher/watcher/logs/watcher.log",
"watcher_time" => "2024-04-30T23:02:11Z",
"@timestamp" => 2024-04-30T23:02:11.352Z,
"old_cost" => "10",
"@version" => "1",
"host" => "003e2df97b53",
"event_name" => "metric",
"shared_subnet_remote_neighbors_ids" => "10.1.123.23_10.1.1.2_10.1.123.24",
"new_cost" => "777",
"event_object" => "10.1.123.24",
"event_status" => "changed"
}
```
I also added a healthcheck for logstash, see the changes diff. Please update your docker-compose file accordingly.
Here is the output of the command. For the elk stack, security is set to false and the license to basic instead of trial. I can confirm the user and password are set correctly.
```
ELASTIC_IP:10.5.14.111
{'error': {'root_cause': [{'type': 'security_exception', 'reason': "unable to authenticate user [{'elastic'}] for REST request [/_index_template/ospf-watcher-updown-events]", 'header': {'WWW-Authenticate': ['Basic realm="security" charset="UTF-8"', 'ApiKey']}}], 'type': 'security_exception', 'reason': "unable to authenticate user [{'elastic'}] for REST request [/_index_template/ospf-watcher-updown-events]", 'header': {'WWW-Authenticate': ['Basic realm="security" charset="UTF-8"', 'ApiKey']}}, 'status': 401}
********** Error **********
The script was not able to create Index Templates because it couldn't authenticate in ELK. In most cases xpack.security.enabled: true is a reason, because it requires certificate of ELK.
********** Error **********
{'error': {'root_cause': [{'type': 'security_exception', 'reason': "unable to authenticate user [{'elastic'}] for REST request [/_index_template/ospf-watcher-costs-changes]", 'header': {'WWW-Authenticate': ['Basic realm="security" charset="UTF-8"', 'ApiKey']}}], 'type': 'security_exception', 'reason': "unable to authenticate user [{'elastic'}] for REST request [/_index_template/ospf-watcher-costs-changes]", 'header': {'WWW-Authenticate': ['Basic realm="security" charset="UTF-8"', 'ApiKey']}}, 'status': 401}
********** Error **********
The script was not able to create Index Templates because it couldn't authenticate in ELK. In most cases xpack.security.enabled: true is a reason, because it requires certificate of ELK.
********** Error **********
```
@Vadims06
When checking, logstash was not running from the ospfwatcher directory; after `docker compose up -d` it started properly. docker-elk also has its own logstash. I changed a conflicting port, but the output from `docker compose ps` shows that both logstash instances use identical ports.
The conflicting port was 50000 when building the elk stack; I changed it to 50001.
After a few minutes, the ospfwatcher logstash disappeared again. It seems the logstash from docker-elk is overwriting the one from ospfwatcher, or the container is crashing after a few minutes. I will revert to using an external elk stack instead of docker.
```
WARN[0000] The "EXPORT_TO_ZABBIX_BOOL" variable is not set. Defaulting to a blank string.
WARN[0000] network internal: network.external.name is deprecated. Please set network.name with external: true
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
quagga quagga:1.0 "/sbin/tini -- /usr/…" quagga 21 hours ago Up 21 hours
watcher ospfwatcher-watcher "python pytail.py" watcher 21 hours ago Up 21 hours
gotar@clo-wn-01:~/ospfwatcher$ sudo docker compose up -d
[sudo] password for gotar:
WARN[0000] The "EXPORT_TO_ZABBIX_BOOL" variable is not set. Defaulting to a blank string.
WARN[0000] network internal: network.external.name is deprecated. Please set network.name with external: true
[+] Running 4/4
⠿ Container quagga Started 10.6s
⠿ Container logstash-index-creator Started 10.9s
⠿ Container logstash Started 11.5s
⠿ Container watcher Started 11.6s
gotar@clo-wn-01:~/ospfwatcher$ sudo docker compose ps
WARN[0000] The "EXPORT_TO_ZABBIX_BOOL" variable is not set. Defaulting to a blank string.
WARN[0000] network internal: network.external.name is deprecated. Please set network.name with external: true
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
logstash ospfwatcher-logstash "/usr/local/bin/dock…" logstash 14 seconds ago Up 2 seconds 5044/tcp, 9600/tcp
quagga quagga:1.0 "/sbin/tini -- /usr/…" quagga 14 seconds ago Up 3 seconds
watcher ospfwatcher-watcher "python pytail.py" watcher 14 seconds ago Up 2 seconds
gotar@clo-wn-01:~/ospfwatcher$ cd ..
gotar@clo-wn-01:~$ cd docker-elk/
gotar@clo-wn-01:~/docker-elk$ sudo docker compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
docker-elk-elasticsearch-1 docker-elk-elasticsearch "/bin/tini -- /usr/l…" elasticsearch 22 hours ago Up 22 hours 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp
docker-elk-kibana-1 docker-elk-kibana "/bin/tini -- /usr/l…" kibana 22 hours ago Up 22 hours 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp
docker-elk-logstash-1 docker-elk-logstash "/usr/local/bin/dock…" logstash 24 minutes ago Up 23 minutes 0.0.0.0:5044->5044/tcp, :::5044->5044/tcp, 0.0.0.0:9600->9600/tcp, :::9600->9600/tcp, 0.0.0.0:50001->50001/tcp, :::50001->50001/tcp, 0.0.0.0:50001->50001/udp, :::50001->50001/udp
```
@Vadims06
I get this error with the new healthcheck change when trying to rebuild, down, or up. I copy-pasted the text directly from the main repo file:
```
services.logstash.healthcheck Additional property start_interval is not allowed
```
@OtoKiroo could you please share the ELK and docker-compose versions you are currently using? I will try to set up the same environment.
@Vadims06
Docker compose version 2.16.0, using portainer as a GUI, but I am doing everything through the CLI. For Docker-Elk I am using the latest version (8.13.2), cloned from the latest repo and built from it. Same for ospfwatcher and topolograph.
Here are the logs for the logstash that keeps crashing. Even with the elk-stack removed, it still happens. Status: Stopped for 4 days with exit code 123.
```
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2024-05-02T19:48:42,354][INFO ][logstash.runner ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2024-05-02T19:48:42,366][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.17.0", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +indy +jit [linux-x86_64]"}
[2024-05-02T19:48:42,369][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
[2024-05-02T19:48:42,420][INFO ][logstash.settings ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2024-05-02T19:48:42,437][INFO ][logstash.settings ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2024-05-02T19:48:42,962][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"1fe8c449-b431-41d9-8d02-7d7a0139a156", :path=>"/usr/share/logstash/data/uuid"}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/sinatra-2.2.1/lib/sinatra/base.rb:931: warning: constant Tilt::Cache is deprecated
TimerTask timeouts are now ignored as these were not able to be implemented correctly
TimerTask timeouts are now ignored as these were not able to be implemented correctly
TimerTask timeouts are now ignored as these were not able to be implemented correctly
TimerTask timeouts are now ignored as these were not able to be implemented correctly
[2024-05-02T19:48:43,701][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
warning: thread "puma reactor (Ruby-0-Thread-4@puma reactor: :1)" terminated with exception (report_on_exception is true):
java.lang.NoSuchMethodError: 'void org.jruby.RubyThread.beforeBlockingCall(org.jruby.runtime.ThreadContext)'
at org.nio4r.Selector.doSelect(Selector.java:237)
at org.nio4r.Selector.select(Selector.java:197)
at org.nio4r.Selector$INVOKER$i$select.call(Selector$INVOKER$i$select.gen)
at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneBlock.call(JavaMethod.java:577)
at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:197)
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$method$select_loop$0(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:75)
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$method$select_loop$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:69)
at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80)
at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70)
at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207)
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$block$run$1(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:39)
at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138)
at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)
at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52)
at org.jruby.runtime.Block.call(Block.java:139)
at org.jruby.RubyProc.call(RubyProc.java:318)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)
at java.base/java.lang.Thread.run(Thread.java:829)
[2024-05-02T19:48:44,786][FATAL][org.logstash.Logstash ]
java.lang.NoSuchMethodError: 'void org.jruby.RubyThread.beforeBlockingCall(org.jruby.runtime.ThreadContext)'
at org.nio4r.Selector.doSelect(Selector.java:237) ~[nio4r_ext.jar:?]
at org.nio4r.Selector.select(Selector.java:197) ~[nio4r_ext.jar:?]
at org.nio4r.Selector$INVOKER$i$select.call(Selector$INVOKER$i$select.gen) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneBlock.call(JavaMethod.java:577) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:197) ~[jruby-complete-9.2.20.1.jar:?]
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$method$select_loop$0(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:75) ~[?:?]
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$method$select_loop$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:69) ~[?:?]
at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) ~[jruby-complete-9.2.20.1.jar:?]
at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.puma_minus_5_dot_6_dot_8_minus_java.lib.puma.reactor.RUBY$block$run$1(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-5.6.8-java/lib/puma/reactor.rb:39) ~[?:?]
at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.runtime.Block.call(Block.java:139) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.RubyProc.call(RubyProc.java:318) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105) ~[jruby-complete-9.2.20.1.jar:?]
at java.lang.Thread.run(Thread.java:829) ~[?:?]
```
@OtoKiroo
I pulled ELK 8.13.2, and this is how the index templates looked before running ospf-watcher. Then I started ospf-watcher:
```
docker-compose up -d
WARN[0000] The "EXPORT_TO_WEBHOOK_URL_BOOL" variable is not set. Defaulting to a blank string.
WARN[0000] The "EXPORT_TO_ELASTICSEARCH_BOOL" variable is not set. Defaulting to a blank string.
WARN[0000] The "EXPORT_TO_ZABBIX_BOOL" variable is not set. Defaulting to a blank string.
WARN[0000] networks.internal: external.name is deprecated. Please set name and external: true
[+] Running 4/4
✔ Container quagga Started 0.2s
✔ Container logstash-index-creator Started 0.4s
✔ Container logstash Healthy 11.1s
✔ Container watcher Started
```
Here is the log of the index-creator:
```
ospfwatcher % docker logs logstash-index-creator
ELASTIC_IP:192.168.0.73
{'acknowledged': True}
{'acknowledged': True}
```
Index templates with OSPF Watcher indexes are present. Here is the list of docker processes:
```
% docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4778cc1933bf ospfwatcher-watcher "python pytail.py" 21 seconds ago Up 10 seconds watcher
ecf7f7ca1238 ospfwatcher-logstash "/usr/local/bin/dock…" 21 seconds ago Up 21 seconds (healthy) 5044/tcp, 9600/tcp logstash
73ee918c4b0e quagga:1.0 "/sbin/tini -- /usr/…" 22 seconds ago Up 21 seconds quagga
784823cbb4ab docker-elk-logstash "/usr/local/bin/dock…" 6 minutes ago Up 6 minutes 0.0.0.0:5044->5044/tcp, 0.0.0.0:9600->9600/tcp, 0.0.0.0:50000->50000/tcp, 0.0.0.0:50000->50000/udp docker-elk-logstash-1
496ef4f284fa docker-elk-kibana "/bin/tini -- /usr/l…" 6 minutes ago Up 6 minutes 0.0.0.0:5601->5601/tcp docker-elk-kibana-1
dd5a7e95e6ea docker-elk-elasticsearch "/bin/tini -- /usr/l…" 6 minutes ago Up 6 minutes 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp docker-elk-elasticsearch-1
c6b70b711959 nginx:latest "/docker-entrypoint.…" 2 days ago Up 2 days 80/tcp, 0.0.0.0:8080->8079/tcp webserver
c4f9e562f141 vadims06/topolograph:latest "gunicorn -w 4 --bin…" 2 days ago Up 2 days 5000/tcp flask
416d81eda329 mongo:4.0.8 "docker-entrypoint.s…" 2 days ago Up 2 days 27017/tcp mongodb
```
@Vadims06 I see a bunch of these random containers as well. I will try to deploy a new docker with the new version and test; let me get back to you. My logstash container just crashes right away.
logstash-index-creator logs:
```
requests.exceptions.ConnectionError: HTTPConnectionPool(host='10.5.14.111', port=9200): Max retries exceeded with url: /_index_template/ospf-watcher-updown-events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f08ce6b1f70>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
> Here is the output for the command. For elk stack, security is false and to basic instead of trial. I can confirm the user and user password are set correctly.

```
ELASTIC_IP:10.5.14.111
{'error': {'root_cause': [{'type': 'security_exception', 'reason': "unable to authenticate user [{'elastic'}] for REST request [/_index_template/ospf-watcher-updown-events]", 'header': {'WWW-Authenticate': ['Basic realm="security" charset="UTF-8"', 'ApiKey']}}], 'type': 'security_exception', 'reason': "unable to authenticate user [{'elastic'}] for REST request [/_index_template/ospf-watcher-updown-events]", 'header': {'WWW-Authenticate': ['Basic realm="security" charset="UTF-8"', 'ApiKey']}}, 'status': 401}
********** Error **********
The script was not able to create Index Templates because it couldn't authenticate in ELK. In most cases xpack.security.enabled: true is a reason, because it requires certificate of ELK.
********** Error **********
{'error': {'root_cause': [{'type': 'security_exception', 'reason': "unable to authenticate user [{'elastic'}] for REST request [/_index_template/ospf-watcher-costs-changes]", 'header': {'WWW-Authenticate': ['Basic realm="security" charset="UTF-8"', 'ApiKey']}}], 'type': 'security_exception', 'reason': "unable to authenticate user [{'elastic'}] for REST request [/_index_template/ospf-watcher-costs-changes]", 'header': {'WWW-Authenticate': ['Basic realm="security" charset="UTF-8"', 'ApiKey']}}, 'status': 401}
********** Error **********
The script was not able to create Index Templates because it couldn't authenticate in ELK. In most cases xpack.security.enabled: true is a reason, because it requires certificate of ELK.
********** Error **********
```
There are two general pieces of advice on how to handle this in https://discuss.elastic.co/t/unable-to-authenticate-user-for-rest-request/197461, please also check it.
> Logstash-index-creator logs: requests.exceptions.ConnectionError: HTTPConnectionPool(host='10.5.14.111', port=9200): Max retries exceeded with url: /_index_template/ospf-watcher-updown-events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f08ce6b1f70>: Failed to establish a new connection: [Errno 111] Connection refused'))

`Connection refused` means that you were not able to connect. Please run ospf-watcher and check the connection:
```
ospfwatcher % docker run -it --network=topolograph_backend --env-file=./.env -v ./logstash/index_template/create.py:/home/watcher/watcher/create.py ospfwatcher-watcher:latest /bin/bash
Inside OSPF Watcher
root@c253892a18e5:/home/watcher/watcher#
```
Run `apt-get update && apt-get install telnet` and check the connection to ELK using the env variables.
@Vadims06 The healthcheck `start_interval` option is a newer feature supported only in docker-compose 2.20.2+ from what I can read, which is why I cannot build using my current docker-compose version (2.16). I will update the version overnight and let you know tomorrow. We have many dockers running in production that cannot be affected during operating hours.
Hi @OtoKiroo, do you have any updates? We can have a session to speed up the troubleshooting if you wish.
@Vadims06 Hi, sorry for the late reply, I've been very busy with other projects in the past weeks.
I've updated docker compose (the Go version) to the most recent 2.27.0 release. I don't use the older V1 docker-compose (Python), which you seem to use.
I am re-building from scratch on a new docker environment. So far I am getting this error:
```
WARN[0000] /home/gotar/topolograph-docker/docker-compose.yml: `version` is obsolete
WARN[0000] /home/gotar/ospfwatcher/docker-compose.yml: `version` is obsolete
```
This is because Docker has deprecated the use of `version` at the root level of the docker compose YAML beginning with Docker Compose 1.27+. Removing "version" from the docker-compose.yaml fixes this issue. I will be working on it this week and will keep you updated if anything more shows up.
@Vadims06 Does the webhook URL export only support the Slack format? It does not seem to work with Discord or Google Chat webhook URLs.
```
[2024-05-28T15:28:32,696][ERROR][logstash.outputs.http ][main][4f7d59c7e25efae22bb91cd73f45fb65da4c5bcf586d88fd78e49d38accd461c] [HTTP Output Failure] Encountered non-2xx HTTP code 400 {:response_code=>400, :url=>"https://discord.com/api/webhooks/1234887856082653245/6D3HQwXEa8kDaY9xM6VAwBHWG8xrGXFO-bMjjSzuafPoACRbKGZGLkQdh6UAX-orstxr", :event=>#<LogStash::Event:0x754f86a4>}
```
As for the rest, I am still getting no logs into the elk stack, with an unauthorized error when using the external elk-stack. I will try with the docker elk-stack again and keep you informed.
@Vadims06 So I've finally managed to make it work with the local docker elk-stack; the index templates were created by the logstash index creator. However, when creating the Data View, it returns "The index pattern you entered doesn't match any data streams, indices, or index aliases." even though the templates do exist.
`sudo docker exec -it quagga cat /var/log/quagga/ospfd.log` returns nothing.
I am still using test_mode, so no GRE tunnel has been configured yet.
Hi @OtoKiroo , thank you for your updates. I will get back to your request a little later, I'm having a really busy week... Unfortunately, I haven't worked with Discord, so I can't be helpful there, but let's focus on making ELK + OSPF Watcher work.
Hi @OtoKiroo ,
I took some time to rebuild OSPF Watcher and add sample test logs to it. Please follow the new version of the Readme and set TEST_MODE="False" in the .env file. A quick check is also possible using containerlab, which is included in the repo.
Hi, could you provide an example for the python script to accept the ELK certificate? I am already running an instance of the ELK stack through WAZUH on a VM. I tried removing the authentication and using HTTP, but the GUI used for ELK is not accessible over http; however, the script works fine and gives no errors with no auth.
topolograph and ospfwatcher are running in separate dockers, while the ELK stack is running on a VM.
```
requests.exceptions.SSLError: HTTPSConnectionPool(host='10.100.0.10', port=9200): Max retries exceeded with url: /_index_template/ospf-watcher-updown-events (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))
```
I am not an expert with python or programming in general.
Thanks!
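A minimal sketch of how such a script could trust the ELK CA certificate and authenticate over HTTPS. This is not the repo's create.py; the host, credentials, and the CA path are placeholders, and only the `_index_template` endpoint is standard Elasticsearch:

```python
# Sketch only: trust a custom Elasticsearch CA and send an authenticated
# PUT /_index_template request. All values below are placeholders.
import base64
import json
import ssl
import urllib.request

def make_ssl_context(ca_cert_path=None):
    """Build an SSL context. With a path, trust that CA certificate
    (e.g. the http_ca.crt shipped with Elasticsearch 8.x); with no path,
    the system trust store is used."""
    if ca_cert_path:
        return ssl.create_default_context(cafile=ca_cert_path)
    return ssl.create_default_context()

def build_template_request(host, user, password, template_name, body):
    """Build an HTTPS PUT for /_index_template/<name> with Basic auth."""
    req = urllib.request.Request(
        f"https://{host}:9200/_index_template/{template_name}",
        data=json.dumps(body).encode(),
        method="PUT",
    )
    req.add_header("Content-Type", "application/json")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# Usage (not executed here; requires a reachable ELK and a real CA file):
# ctx = make_ssl_context("/path/to/http_ca.crt")
# req = build_template_request("10.100.0.10", "elastic", "changeme",
#                              "ospf-watcher-updown-events", {...})
# urllib.request.urlopen(req, context=ctx)
```

The same idea applies if the script uses the `requests` library instead: pass the CA path via `verify="/path/to/http_ca.crt"` and the credentials via `auth=(user, password)`.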