Closed cybergoof closed 5 years ago
I also noticed that there were differences in how the paths were written. So I changed everything to /opt/vulnwhisperer/vulnwhisperer.ini and /opt/vulnwhisperer/data/
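For reference, a minimal docker-compose volumes sketch matching those container paths (the host-side paths on the left are assumptions from my checkout, adjust to yours):

```yaml
vulnwhisperer:
  volumes:
    # host path : container path, standardized on /opt/vulnwhisperer/
    - ./vulnwhisperer.ini:/opt/vulnwhisperer/vulnwhisperer.ini
    - ./data:/opt/vulnwhisperer/data/
```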
I am doing a first test out-of-the-box of the new docker-compose but I have encountered several issues:
:+1: Kibana suddenly got an error, crashed and exited; this happens as soon as ElasticSearch is up and Kibana manages to connect to it in order to retrieve the .kibana index from it:
kibana | FATAL Error: Index .kibana belongs to a version of Kibana that cannot be automatically migrated. Reset it or use the X-Pack upgrade assistant.
Note: Kibana presents this error EVEN when running the docker-compose with both the vulnwhisperer container and the volumes mounted on Logstash commented out. This seems to be a problem with the container image.
:+1: Logstash is breaking on the RabbitMQ config files due to connection errors. Solution: we should either exclude/delete them from the pipeline folder or do as we did in the older docker-compose and only import the files we use, as RabbitMQ always causes issues.
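Mounting individual pipeline files instead of the whole folder, as in the older docker-compose, would look roughly like this (the filenames are illustrative, not necessarily the exact ones in elk6):

```yaml
logstash:
  volumes:
    # mount only the pipelines we actually use, so the image's stock
    # RabbitMQ pipelines never get loaded
    - ./elk6/1000_nessus_process_file.conf:/usr/share/logstash/pipeline/1000_nessus_process_file.conf
    - ./elk6/2000_qualys_web_scans.conf:/usr/share/logstash/pipeline/2000_qualys_web_scans.conf
```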
:+1: Logstash complaining of dead ES instance/Unreachable and trying to resurrect.
The reason ElasticSearch appears unreachable is that the Logstash config files set the output host to localhost:9200 instead of elasticsearch:9200. I think the configs should default to elasticsearch:9200 so that things work directly with the docker-compose file for users testing it, or otherwise we should make a copy of the files into the docker folder so that docker-compose works fully out of the box.
Solution: all Logstash config files in the elk6 folder should be preconfigured with the ElasticSearch URL.
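A one-liner sketch for preconfiguring them, assuming the configs sit under elk6/ and use the plain localhost:9200 host (demonstrated here on a throwaway sample file; run the sed against the real configs):

```shell
# Rewrite the ES output host to the compose service name in every pipeline file.
# elk6/sample.conf is a stand-in created just for this demonstration.
mkdir -p elk6
printf 'output { elasticsearch { hosts => ["localhost:9200"] } }\n' > elk6/sample.conf
sed -i 's/localhost:9200/elasticsearch:9200/g' elk6/*.conf
cat elk6/sample.conf
```

(On macOS, BSD sed needs `sed -i ''` instead of `sed -i`.)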
:+1: [Check Note2] ElasticSearch warning constantly appearing (not sure how this affects us):
elasticsearch | [2019-01-31T10:26:35,419][WARN ][o.e.x.w.WatcherService ] [3MnWV7h] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[false]
Reference: https://discuss.elastic.co/t/es-5-4-6-0-upgrade-watcher-issues/108481
Note: ElasticSearch presents this error EVEN when running the docker-compose with both the vulnwhisperer container and the volumes mounted on Logstash commented out. This seems to be a problem with the container image.
Note2: It seems this is related to the XPack trial license being expired and watcher being a commercial plugin, so it is nothing we should be worried about.
[2019-01-31T16:07:55,597][WARN ][o.e.l.LicenseService] [3MnWV7h] LICENSE [EXPIRED] ON [THURSDAY, OCTOBER 11, 2018].
elasticsearch | # COMMERCIAL PLUGINS OPERATING WITH REDUCED FUNCTIONALITY
elasticsearch | # - watcher
elasticsearch | # - PUT / GET watch APIs are disabled, DELETE watch API continues to work
elasticsearch | # - Watches execute and write to the history
elasticsearch | # - The actions of the watches don't execute
:+1: Logstash doesn't seem to be able to index events in ElasticSearch:
logstash | [2019-01-31T10:50:35,273][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-vulnwhisperer-2017.02", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x12b717f6>], :response=>{"index"=>{"_index"=>"logstash-vulnwhisperer-2017.02", "_type"=>"doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Failed to parse mapping [_default_]: [include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.", "caused_by"=>{"type"=>"mapper_parsing_exception", "reason"=>"[include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field."}}}}}
Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html
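As the error message itself suggests, any index template still carrying include_in_all needs to switch to copy_to. A hedged sketch of what that looks like in a 6.x mapping (the field names here are made up for illustration, not taken from our template):

```json
{
  "mappings": {
    "doc": {
      "properties": {
        "all_fields": { "type": "text" },
        "plugin_name": {
          "type": "text",
          "copy_to": "all_fields"
        }
      }
    }
  }
}
```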
VulnWhisperer seems to be working as expected, downloading all the Nessus files, although it is breaking with Qualys Vulnerability Management due to the issues on qualysapi from when the rollback was done; I need to fix that on Austin's fork.
Will check on the .kibana index migration to the new ELK and see if I can make it work.
Edit: adding references to the bullet points in order to have all the info together for better troubleshooting.
Logstash's complaint regarding the dead ES instance apparently isn't solved just by modifying the Logstash config file; as can be seen in these logs, the host gets rewritten:
logstash | [2019-01-31T11:44:28,177][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch.local:9200"]}
logstash | [2019-01-31T11:44:28,207][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash | [2019-01-31T11:44:28,226][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
logstash | [2019-01-31T11:44:28,239][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
logstash | [2019-01-31T11:44:28,296][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
Even if it says //elasticsearch:9200, that's okay, as it is done by Logstash itself, as you can see here with the localhost one:
logstash | [2019-01-31T11:51:53,931][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
I don't really know what is overwriting that...
Edit: My bad, I had only changed the ElasticSearch host in the Nessus Logstash config file, but the rest of the files were being loaded as well, so the URL was being overwritten.
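To catch stragglers like that, a quick grep over the pipeline folder lists every file still pointing at localhost (elk6/ is assumed as the config location):

```shell
# list any pipeline file still carrying the old host; empty output means done
grep -rn 'localhost:9200' elk6/ || echo 'no pipeline files left pointing at localhost'
```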
Okay, I will take a look at it, but I won't have time over the next week. If there is anything you think needs to be changed, feel free to do it.
@cybergoof is there some reason why you were pulling the ELK 6.5.2 instead of the 6.6.0?
Will do, I will be working on this until I make it work :) I am really invested in upgrading to ELK6 and finishing this before tackling other issues ^^
Port exposure problems have been resolved after upgrading Docker CE from 18.06.1 to 18.09.1; we should ask every user to make sure they are on the latest Docker CE version.
The 1st issue (the .kibana index) was due to my lack of understanding of docker-compose: docker-compose was launching the latest ES container (6.6.0) but loading the virtual volume esdata1 declared at the end of the config file; esdata1 had already been created in older tests with the ELK 5.6 docker-compose and still held all the data, including ElasticSearch's license (which also caused issue 4 in my case) and all of ElasticSearch's saved and indexed data.
This data was saved locally at the path /var/lib/docker/volumes/vulnwhisperer_esdata1/.
We will need to document this in case this docker-compose is launched where the older one was, as those users would hit the same issues. If users want to keep the old database, they will need to upgrade the structures through Kibana 5.6.2 and then start using the ELK6 docker-compose version.
To delete the existing volumes, run the following command: docker volume prune -f (note this removes all Docker volumes not in use by a container, not just VulnWhisperer's).
After doing that and launching the ELK6 docker-compose, point 5 also seems to be solved, as ElasticSearch 6.6.0 appears to be updating all of the mappings:
elasticsearch | [2019-02-01T09:58:30,067][INFO ][o.e.c.m.MetaDataMappingService] [JeYPcwx] [logstash-vulnwhisperer-2018.01/Z3YJqDBCRySChW1r25hGfg] update_mapping [doc]
After testing with the volumes pruned, Logstash still shows this warning:
logstash | [2019-02-01T10:37:05,163][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
and when Kibana is launched, it gives an error and exits:
org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]
which I believe is related to a lack of resources (I will upgrade the VM's RAM and raise ElasticSearch's "ES_JAVA_OPTS=-Xms512m -Xmx512m" to bigger numbers).
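If it is heap pressure, the compose change is just the environment line; 2g below is an arbitrary example (the usual guidance is up to half the VM's RAM, with Xms and Xmx kept equal):

```yaml
elasticsearch:
  environment:
    # raised from 512m; size to the VM, keep Xms and Xmx equal
    - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
```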
I think this will let you run version 6 of the ELK stack. All the config files are in 'elk6'. Run by
When you run kibana, load the kibana dashboard file in elk6/kibana.json
I did make some changes. I changed the container names to remove the "vuln" prefix. Logstash, VulnWhisperer and Kibana all came up.
However, I could not test with Nessus. Can someone who has a working dev environment please give it a shot?