robcowart / elastiflow

Network flow analytics (Netflow, sFlow and IPFIX) with the Elastic Stack

Pipeline not creating in ElasticSearch #188

Closed MyCodeRocks closed 5 years ago

MyCodeRocks commented 5 years ago

Hi,

Thought I would try everything before coming here, and I have tried everything I can. The issue is simple: the pipeline is just not being created in Elasticsearch.

  1. The machine has a Xeon CPU, 124 GB of memory, and 500 GB of storage. It only runs ELK, ingesting Suricata logs from a pfSense firewall.
  2. Each part of the stack (ELK) has 24 GB of Java heap set.
  3. Suricata logs are flowing and all my dashboards are set up. Works well.
  4. The ElastiFlow directory is in the Logstash directory with the .conf files.
  5. Updated and installed the modules successfully.
  6. The heaps have more than enough memory.
  7. All directory access rights are correct (user/group logstash).
  8. The service drop-in is in /etc/systemd/system/logstash.service.d/elastiflow.conf and it starts up.
  9. I have pipelines.yml edited with the elastiflow pipeline as per the documentation (see the sketch after this list).
  10. On Logstash startup I can see the pipeline "Started", the logstash/elastiflow configs loaded, the env variables being loaded, and the port being allocated "successfully".
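For reference, the pipelines.yml edit is along these lines; this is only a sketch, with the paths assumed from the config files shown being read in the logs further down:

```bash
# A sketch of writing /etc/logstash/pipelines.yml, assuming the stock
# ElastiFlow layout: the existing "main" pipeline runs alongside "elastiflow".
sudo tee /etc/logstash/pipelines.yml >/dev/null <<'EOF'
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: elastiflow
  path.config: "/etc/logstash/elastiflow/conf.d/*.conf"
EOF
```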

In the env variables I have set the host to the interface to receive events on.

I have opened the UDP (and, in desperate moments, even TCP) port that I specified on the firewall. The only [ERROR] that I can see in the Logstash logs (at debug level) is:

[ERROR][logstash.inputs.udp ] UDP listener died {:exception=>#<Errno::EADDRNOTAVAIL: Cannot assign requested
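For what it's worth, EADDRNOTAVAIL from the UDP input usually means the listen address in the config is not assigned to any local interface. A quick sanity check, as a sketch (the ELASTIFLOW_SFLOW_IPV4_HOST name is the ElastiFlow convention and is an assumption here):

```bash
# List the IPv4 addresses actually assigned to this host. The address the
# UDP input binds (e.g. ELASTIFLOW_SFLOW_IPV4_HOST) must appear here,
# or be 0.0.0.0 to listen on all interfaces.
ip -4 addr show | grep 'inet '
```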

I thought I had this sorted when I opened the UDP and TCP port on the firewall...

$ sudo lsof -nPi :6343

COMMAND  PID  USER     FD   TYPE DEVICE  SIZE/OFF NODE NAME
java    3171 logstash 270u  IPv4 9381923      0t0  UDP 127.0.0.1:6343

(Remember, I am trying to capture flows from the actual host running Logstash.)

Added the UDP and TCP port to the firewall using:

sudo ufw allow from 127.0.0.1 to any port 6343 proto udp
sudo ufw allow from 127.0.0.1 to any port 6343 proto tcp

Not sure what else to do, pulling my hair out. The pipeline for ElastiFlow won't be created in Elasticsearch whatsoever.

Happy to share logs and config where needed:

Here are some items I see in the Logstash log with grep -w elasticflow /var/log/logstash/logstash-plain.log:

[DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"elastiflow", :thread=>"#<Thread:0x74514322@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:46 sleep>"}

Not sure how it's "sleep" if it's not even showing up in Elasticsearch...

$ sudo service logstash status
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/logstash.service.d
           └─elastiflow.conf
   Active: active (running) since Sun 2018-09-23 18:34:22 SAST; 1h 44min ago
 Main PID: 3171 (java)
    Tasks: 95 (limit: 4915)
   CGroup: /system.slice/logstash.service

Things I noticed:

[INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"My_Ip_Addy_I_Chose:6343"}
[2018-09-23T20:01:55,808][WARN ][logstash.inputs.udp ] Unable to set receive_buffer_bytes to desired size. Requested 33554432 but obtained 212992 bytes.
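As an aside, the receive_buffer_bytes warning is usually cleared by raising the kernel receive-buffer limit; a sketch, not ElastiFlow-specific:

```bash
# Let sockets request up to 32 MiB of receive buffer (the 33554432 bytes
# Logstash asked for above).
sudo sysctl -w net.core.rmem_max=33554432
# Persist the setting across reboots.
echo 'net.core.rmem_max=33554432' | sudo tee /etc/sysctl.d/99-logstash.conf
```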

[DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"elastiflow", :thread=>"#<Thread:0x74514322@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:46 sleep>"}

[org.logstash.ackedqueue.Queue] opening head page: 0, in: /var/lib/logstash/queue/elastiflow, with checkpoint: pageNum=0, firstUnackedPageNum=0, firstUnackedSeqNum=0, minSeqNum=0, elementCount=0, isFullyAcked=no
[2018-09-23T18:38:59,151][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"elastiflow", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-23T18:38:59,406][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/etc/logstash/elastiflow/templates/elastiflow.template.json"}
[2018-09-23T18:38:59,516][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/elastiflow-3.3.0
[2018-09-23T18:39:00,237][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/etc/logstash/elastiflow/geoipdbs/GeoLite2-City.mmdb"}
[2018-09-23T18:39:00,239][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/etc/logstash/elastiflow/geoipdbs/GeoLite2-ASN.mmdb"}
[2018-09-23T18:39:11,822][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/etc/logstash/elastiflow/geoipdbs/GeoLite2-City.mmdb"}
[2018-09-23T18:39:11,823][INFO ][logstash.filters.geoip ] Using geoip database {:path=>"/etc/logstash/elastiflow/geoipdbs/GeoLite2-ASN.mmdb"}
[2018-09-23T18:39:23,893][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"elastiflow", :thread=>"#<Thread:0x74514322@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:46 sleep>"}
[2018-09-23T18:39:24,133][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:elastiflow, :main], :non_running_pipelines=>[]}
[2018-09-23T18:39:28,885][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"elastiflow", :thread=>"#<Thread:0x74514322@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:46 sleep>"}

Umm, there are no pipelines in Elasticsearch.

Then I can see Logstash loading the env variables, so it is reading the configs. Great.

[2018-09-23T18:34:47,140][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/10_input_ipfix_ipv4.logstash.conf"}
[2018-09-23T18:34:47,142][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/10_input_netflow_ipv4.logstash.conf"}
[2018-09-23T18:34:47,142][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/10_input_sflow_ipv4.logstash.conf"}
[2018-09-23T18:34:47,143][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_10_begin.logstash.conf"}
[2018-09-23T18:34:47,144][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_20_netflow.logstash.conf"}
[2018-09-23T18:34:47,145][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_30_ipfix.logstash.conf"}
[2018-09-23T18:34:47,146][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_40_sflow.logstash.conf"}
[2018-09-23T18:34:47,147][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_90_post_process.logstash.conf"}
[2018-09-23T18:34:47,149][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/30_output_10_single.logstash.conf"}

It is supposedly creating a pipeline (which it doesn't):

[2018-09-23T18:34:47,202][DEBUG][logstash.agent ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:elastiflow}

After that it's reading env variables:

[2018-09-23T18:36:51,434][DEBUG][logstash.filters.translate] Replacing ${ELASTIFLOW_DICT_PATH:/etc/logstash/elastiflow/dictionaries} with actual value

It's supposedly starting the pipeline:

[2018-09-23T18:38:58,920][DEBUG][org.logstash.ackedqueue.Queue] opening head page: 0, in: /var/lib/logstash/queue/elastiflow, with checkpoint: pageNum=0, firstUnackedPageNum=0, firstUnackedSeqNum=0, minSeqNum=0, elementCount=0, isFullyAcked=no
[2018-09-23T18:38:59,151][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"elastiflow", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}

It says it's loading an Elasticsearch template:

[2018-09-23T18:38:59,406][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/etc/logstash/elastiflow/templates/elastiflow.template.json"}
[2018-09-23T18:38:59,516][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/elastiflow-3.3.0

Um, says started -- HELLLL NO it hasn't:

[2018-09-23T18:39:23,893][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"elastiflow", :thread=>"#<Thread:0x74514322@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:46 sleep>"}
[2018-09-23T18:39:24,133][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:elastiflow, :main], :non_running_pipelines=>[]}
[2018-09-23T18:39:28,885][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"elastiflow", :thread=>"#<Thread:0x74514322@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:46 sleep>"}

Nothing in Elasticsearch that even resembles an ElastiFlow pipeline...

The Elasticsearch logs show, for grep -w elastiflow /var/log/elasticsearch/elasticsearch.log:

[2018-09-23T01:06:06,187][INFO ][o.e.c.m.MetaDataIndexTemplateService] [EVTk5hQ] adding template [elastiflow-3.3.0] for index patterns [elastiflow-3.3.0-*]
[2018-09-23T01:16:03,695][INFO ][o.e.c.m.MetaDataIndexTemplateService] [EVTk5hQ] adding template [elastiflow-3.3.0] for index patterns [elastiflow-3.3.0-*]
[2018-09-23T08:20:32,767][INFO ][o.e.c.m.MetaDataIndexTemplateService] [EVTk5hQ] adding template [elastiflow-3.3.0] for index patterns [elastiflow-3.3.0-*]
[2018-09-23T08:48:44,296][INFO ][o.e.c.m.MetaDataIndexTemplateService] [EVTk5hQ] adding template [elastiflow-3.3.0] for index patterns [elastiflow-3.3.0-*]
[2018-09-23T09:36:31,490][INFO ][o.e.c.m.MetaDataIndexTemplateService] [EVTk5hQ] adding template [elastiflow-3.3.0] for index patterns [elastiflow-3.3.0-*]
[2018-09-23T17:33:26,804][INFO ][o.e.c.m.MetaDataIndexTemplateService] [EVTk5hQ] adding template [elastiflow-3.3.0] for index patterns [elastiflow-3.3.0-*]
[2018-09-23T18:38:59,620][INFO ][o.e.c.m.MetaDataIndexTemplateService] [EVTk5hQ] adding template [elastiflow-3.3.0] for index patterns [elastiflow-3.3.0-*]
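The template's presence can also be confirmed directly; a sketch, assuming Elasticsearch on localhost:9200:

```bash
# Confirm the index template Logstash claims to have installed really exists.
curl -s 'localhost:9200/_template/elastiflow-3.3.0?pretty' | head -n 20
```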


So what am I missing --- or have I looked at this for so long that I am not seeing what is in front of me?

For completeness, going through the logs, here is the Logstash startup (again with the phantom pipeline "Created"):

[2018-09-23T18:34:46,164][DEBUG][logstash.runner ] --------------- Logstash Settings -------------------
[2018-09-23T18:34:46,224][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2018-09-23T18:34:46,306][DEBUG][logstash.agent ] Setting global FieldReference parsing mode: COMPAT
[2018-09-23T18:34:46,333][DEBUG][logstash.agent ] Setting up metric collection
[2018-09-23T18:34:46,422][DEBUG][logstash.instrument.periodicpoller.os] Starting {:polling_interval=>5, :polling_timeout=>120}
[2018-09-23T18:34:46,519][DEBUG][logstash.instrument.periodicpoller.cgroup] Error, cannot retrieve cgroups information {:exception=>"NoMethodError", :message=>"undefined method `[]' for nil:NilClass"}
[2018-09-23T18:34:46,707][DEBUG][logstash.instrument.periodicpoller.jvm] Starting {:polling_interval=>5, :polling_timeout=>120}
[2018-09-23T18:34:46,859][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-09-23T18:34:46,867][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-09-23T18:34:46,889][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2018-09-23T18:34:46,901][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2018-09-23T18:34:46,956][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.4.1"}
[2018-09-23T18:34:46,975][DEBUG][logstash.agent ] Starting agent
[2018-09-23T18:34:47,023][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2018-09-23T18:34:47,112][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/etc/logstash/conf.d/30-outputs.conf.save"]}
[2018-09-23T18:34:47,115][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/01-inputs.conf"}
[2018-09-23T18:34:47,125][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/10-pfsense-filter.conf"}
[2018-09-23T18:34:47,127][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/30-outputs.conf"}
[2018-09-23T18:34:47,140][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/etc/logstash/elastiflow/conf.d/10_input_ipfix_ipv6.logstash.conf.disabled", "/etc/logstash/elastiflow/conf.d/10_input_netflow_ipv6.logstash.conf.disabled", "/etc/logstash/elastiflow/conf.d/10_input_sflow_ipv6.logstash.conf.disabled", "/etc/logstash/elastiflow/conf.d/30_output_20_multi.logstash.conf.disabled"]}
[2018-09-23T18:34:47,140][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/10_input_ipfix_ipv4.logstash.conf"}
[2018-09-23T18:34:47,142][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/10_input_netflow_ipv4.logstash.conf"}
[2018-09-23T18:34:47,142][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/10_input_sflow_ipv4.logstash.conf"}
[2018-09-23T18:34:47,143][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_10_begin.logstash.conf"}
[2018-09-23T18:34:47,144][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_20_netflow.logstash.conf"}
[2018-09-23T18:34:47,145][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_30_ipfix.logstash.conf"}
[2018-09-23T18:34:47,146][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_40_sflow.logstash.conf"}
[2018-09-23T18:34:47,147][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/20_filter_90_post_process.logstash.conf"}
[2018-09-23T18:34:47,149][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/elastiflow/conf.d/30_output_10_single.logstash.conf"}
[2018-09-23T18:34:47,192][DEBUG][logstash.agent ] Converging pipelines state {:actions_count=>2}
[2018-09-23T18:34:47,203][DEBUG][logstash.agent ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
[2018-09-23T18:34:47,202][DEBUG][logstash.agent ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:elastiflow}

robcowart commented 5 years ago
[2018-09-23T18:39:24,133][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:elastiflow, :main], :non_running_pipelines=>[]}

The above message means that the Pipeline started successfully.

When you say "Nothing in Elasticsearch that even resembles an ElastiFlow pipeline" I am not sure what you mean. Are you saying that you see no data and you believe that you should?

MyCodeRocks commented 5 years ago
> [2018-09-23T18:39:24,133][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:elastiflow, :main], :non_running_pipelines=>[]}
>
> The above message means that the Pipeline started successfully.
>
> When you say "Nothing in Elasticsearch that even resembles an ElastiFlow pipeline" I am not sure what you mean. Are you saying that you see no data and you believe that you should?

Thank you for the reply. After trawling every log at debug level, I eventually found this (excuse the long log):

[2018-09-26T12:32:08,848][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:elastiflow, :exception=>"Java::JavaNet::URISyntaxException", :message=>"Expected scheme name at index 0: :127.0.0.1:9200", :backtrace=>["java.net.URI$Parser.fail(java/net/URI.java:2848)", "java.net.URI$Parser.failExpecting(java/net/URI.java:2854)", "java.net.URI$Parser.parse(java/net/URI.java:3046)", "java.net.URI.<init>(java/net/URI.java:588)", "java.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:423)", "org.jruby.javasupport.JavaConstructor.newInstanceDirect(org/jruby/javasupport/JavaConstructor.java:278)", "org.jruby.RubyClass.newInstance(org/jruby/RubyClass.java:1001)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)", "usr.share.logstash.logstash_minus_core.lib.logstash.util.safe_uri.initialize(/usr/share/logstash/logstash-core/lib/logstash/util/safe_uri.rb:21)", "usr.share.logstash.logstash_minus_core.lib.logstash.util.safe_uri.RUBY$method$initialize$0$__VARARGS__(usr/share/logstash/logstash_minus_core/lib/logstash/util//usr/share/logstash/logstash-core/lib/logstash/util/safe_uri.rb)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.validate_value(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:513)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.block in process_parameter_value(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:328)", "org.jruby.RubyArray.collect(org/jruby/RubyArray.java:2472)", "org.jruby.RubyArray.map(org/jruby/RubyArray.java:2486)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.process_parameter_value(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:328)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.block in validate_check_parameter_values(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:351)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1734)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.block in validate_check_parameter_values(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:345)", "org.jruby.RubyHash$12.visit(org/jruby/RubyHash.java:1362)", "org.jruby.RubyHash$12.visit(org/jruby/RubyHash.java:1359)", "org.jruby.RubyHash.visitLimited(org/jruby/RubyHash.java:662)", "org.jruby.RubyHash.visitAll(org/jruby/RubyHash.java:647)", "org.jruby.RubyHash.iteratorVisitAll(org/jruby/RubyHash.java:1319)", "org.jruby.RubyHash.each_pairCommon(org/jruby/RubyHash.java:1354)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1343)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.validate_check_parameter_values(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:344)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.validate(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:234)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.config_init(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:85)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.RUBY$method$config_init$0$__VARARGS__(usr/share/logstash/logstash_minus_core/lib/logstash/config//usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb)",
"usr.share.logstash.logstash_minus_core.lib.logstash.outputs.base.initialize(/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:60)", "org.jruby.RubyClass.newInstance(org/jruby/RubyClass.java:1001)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.RubyClass.finvoke(org/jruby/RubyClass.java:908)", "org.jruby.RubyBasicObject.callMethod(org/jruby/RubyBasicObject.java:363)", "org.logstash.config.ir.compiler.OutputStrategyExt$SimpleAbstractOutputStrategyExt.initialize(org/logstash/config/ir/compiler/OutputStrategyExt.java:224)", "org.logstash.config.ir.compiler.OutputStrategyExt$SimpleAbstractOutputStrategyExt$INVOKER$i$1$0$initialize.call(org/logstash/config/ir/compiler/OutputStrategyExt$SimpleAbstractOutputStrategyExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.RubyClass.newInstance(org/jruby/RubyClass.java:1022)", "org.logstash.config.ir.compiler.OutputDelegatorExt.initialize(org/logstash/config/ir/compiler/OutputDelegatorExt.java:48)", "org.logstash.config.ir.compiler.OutputDelegatorExt.initialize(org/logstash/config/ir/compiler/OutputDelegatorExt.java:30)", "org.logstash.plugins.PluginFactoryExt$Plugins.plugin(org/logstash/plugins/PluginFactoryExt.java:217)", "org.logstash.plugins.PluginFactoryExt$Plugins.plugin(org/logstash/plugins/PluginFactoryExt.java:166)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.plugin(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:71)", "RUBY.<eval>((eval):16633)", "org.jruby.RubyKernel.evalCommon(org/jruby/RubyKernel.java:1027)", "org.jruby.RubyKernel.eval(org/jruby/RubyKernel.java:994)", "org.jruby.RubyKernel$INVOKER$s$0$3$eval19.call(org/jruby/RubyKernel$INVOKER$s$0$3$eval19.gen)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.initialize(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:49)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.initialize(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90)", "org.jruby.RubyClass.newInstance(org/jruby/RubyClass.java:1022)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)", "RUBY.execute(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.block in converge_state(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:309)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:289)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:246)", "java.lang.Thread.run(java/lang/Thread.java:748)"]} After that interesting:

[2018-09-26T12:32:08,877][TRACE][logstash.agent ] Converge results {:success=>false, :failed_actions=>[], :successful_actions=>["id: main, action_type: LogStash::PipelineAction::Create"]}
[2018-09-26T12:32:09,005][DEBUG][logstash.agent ] Starting puma
[2018-09-26T12:32:09,074][DEBUG][logstash.agent ] Trying to start WebServer {:port=>10600}
[2018-09-26T12:32:09,107][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaNet::URISyntaxException` for `PipelineAction::Create`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:103:in `create'", "org/logstash/execution/ConvergeResultExt.java:34:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:321:in `block in converge_state'"]}
[2018-09-26T12:32:09,121][DEBUG][logstash.filters.json ] Running json filter {:event=>#}

Now I'm not sure why it's not getting an index; I've been following the setup manual religiously.

I am seeing a lot of: '[DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to exists or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore'

So in my mind (and I could be wrong) it is not able to read the values in the config, and therefore it's not able to resolve them to specify the index?

After this it keeps restarting Logstash, as I have restart-on-failure set. This is ELK stack 6.4.1. Would appreciate the guidance.

robcowart commented 5 years ago

Did you change the default port that Logstash's REST API listens on? The default is 9600. That is all I can think of that the port=>10600 refers to. Perhaps try changing it back.
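For reference, the API port comes from http.port in logstash.yml, and the API itself is a quick liveness check; a sketch:

```bash
# Ask the Logstash monitoring API for its status (default port 9600).
curl -s 'localhost:9600/?pretty'
# See whether http.port was overridden in the settings file.
grep -n 'http.port' /etc/logstash/logstash.yml
```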

MyCodeRocks commented 5 years ago

Thank you once again for the time and response! Yes, I did change the port as I was trying to debug the crash. One of the items I was reading mentioned a conflict in ports. Let me change it back and get back to you in 10 minutes.

MyCodeRocks commented 5 years ago

Ok, now I remember why: if I run it on the normal port 9600, it then does: '[ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit'

After reading up, this was attributed to Logstash clashing with another instance on the same port, so I changed the port.
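A quick way to test that theory, as a sketch:

```bash
# If another process (e.g. an old Logstash that never exited) is holding
# the API port, it will show up here.
sudo lsof -nPi :9600
```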

However, as above, if I do that I get:

[2018-09-26T12:32:08,877][TRACE][logstash.agent ] Converge results {:success=>false, :failed_actions=>[], :successful_actions=>["id: main, action_type: LogStash::PipelineAction::Create"]}
[2018-09-26T12:32:09,005][DEBUG][logstash.agent ] Starting puma
[2018-09-26T12:32:09,074][DEBUG][logstash.agent ] Trying to start WebServer {:port=>10600}
[2018-09-26T12:32:09,107][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaNet::URISyntaxException` for `PipelineAction::Create`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:103:in `create'", "org/logstash/execution/ConvergeResultExt.java:34:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:321:in `block in converge_state'"]}
[2018-09-26T12:32:09,121][DEBUG][logstash.filters.json ] Running json filter {:event=>#<LogStash::Event:0x37b5654f>}

As soon as I change the port it works. Maybe I should point it to the 10xxx port? (What I mean by "it" is ElastiFlow.)

robcowart commented 5 years ago

Are you running multiple instances of Logstash on a single box?

MyCodeRocks commented 5 years ago

No sir, just one full stack: Elasticsearch, Kibana, Logstash.

MyCodeRocks commented 5 years ago

Ok, I think we're getting closer. Now that the ports are back to normal, the error changed:

[ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:elastiflow, :exception=>"Java::OrgLogstashSecretStore::SecretStoreException::AccessException", :message=>"Can not access Logstash keystore at /etc/logstash/logstash.keystore. Please verify correct file permissions and keystore password.", :backtrace=>["org.logstash.secret.store.backend.JavaKeyStore.load(org/logstash/secret/store/backend/JavaKeyStore.java:262)", "org.logstash.secret.store.backend.JavaKeyStore.load(org/logstash/secret/store/backend/JavaKeyStore.java:40)", "org.logstash.secret.store.SecretStoreFactory.doIt(org/logstash/secret/store/SecretStoreFactory.java:107)", "org.logstash.secret.store.SecretStoreFactory.load(org/logstash/secret/store/SecretStoreFactory.java:93)", "org.logstash.secret.store.SecretStoreExt.getIfExists(org/logstash/secret/store/SecretStoreExt.java:37)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:423)", "org.jruby.javasupport.JavaMethod.invokeStaticDirect(org/jruby/javasupport/JavaMethod.java:355)", "usr.share.logstash.logstash_minus_core.lib.logstash.util.substitution_variables.block in replace_placeholders(/usr/share/logstash/logstash-core/lib/logstash/util/substitution_variables.rb:45)", "org.jruby.RubyString.gsubCommon19(org/jruby/RubyString.java:2629)", "org.jruby.RubyString.gsubCommon19(org/jruby/RubyString.java:2583)", "org.jruby.RubyString.gsub(org/jruby/RubyString.java:2541)", "usr.share.logstash.logstash_minus_core.lib.logstash.util.substitution_variables.replace_placeholders(/usr/share/logstash/logstash-core/lib/logstash/util/substitution_variables.rb:35)", "usr.share.logstash.logstash_minus_core.lib.logstash.util.substitution_variables.deep_replace(/usr/share/logstash/logstash-core/lib/logstash/util/substitution_variables.rb:23)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.block in config_init(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:82)", "org.jruby.RubyHash$12.visit(org/jruby/RubyHash.java:1362)", "org.jruby.RubyHash$12.visit(org/jruby/RubyHash.java:1359)", "org.jruby.RubyHash.visitLimited(org/jruby/RubyHash.java:662)", "org.jruby.RubyHash.visitAll(org/jruby/RubyHash.java:647)", "org.jruby.RubyHash.iteratorVisitAll(org/jruby/RubyHash.java:1319)", "org.jruby.RubyHash.each_pairCommon(org/jruby/RubyHash.java:1354)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1343)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.config_init(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:81)", "usr.share.logstash.logstash_minus_core.lib.logstash.config.mixin.RUBY$method$config_init$0$__VARARGS__(usr/share/logstash/logstash_minus_core/lib/logstash/config//usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb)", "usr.share.logstash.logstash_minus_core.lib.logstash.inputs.base.initialize(/usr/share/logstash/logstash-core/lib/logstash/inputs/base.rb:60)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_tcp_minus_5_dot_0_dot_9_minus_java.lib.logstash.inputs.tcp.initialize(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-tcp-5.0.9-java/lib/logstash/inputs/tcp.rb:119)", "org.jruby.RubyClass.newInstance(org/jruby/RubyClass.java:1001)",
"org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.RubyClass.finvoke(org/jruby/RubyClass.java:908)", "org.jruby.RubyBasicObject.callMethod(org/jruby/RubyBasicObject.java:363)", "org.logstash.plugins.PluginFactoryExt$Plugins.plugin(org/logstash/plugins/PluginFactoryExt.java:233)", "org.logstash.plugins.PluginFactoryExt$Plugins.plugin(org/logstash/plugins/PluginFactoryExt.java:166)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.plugin(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:71)", "RUBY.<eval>((eval):8)", "org.jruby.RubyKernel.evalCommon(org/jruby/RubyKernel.java:1027)", "org.jruby.RubyKernel.eval(org/jruby/RubyKernel.java:994)", "org.jruby.RubyKernel$INVOKER$s$0$3$eval19.call(org/jruby/RubyKernel$INVOKER$s$0$3$eval19.gen)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.initialize(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:49)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.initialize(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90)", "org.jruby.RubyClass.newInstance(org/jruby/RubyClass.java:1022)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen)", "RUBY.execute(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.block in converge_state(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:309)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:289)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:246)", "java.lang.Thread.run(java/lang/Thread.java:748)"]}

I will go check the keystore, update the conf files accordingly, and report back. Interestingly, Logstash is still exiting, but I am not getting that webserver error.
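Recreating the keystore in the right settings directory looks roughly like this; a sketch only, and back the old file up first:

```bash
# Move the suspect keystore aside and create a fresh one next to logstash.yml.
sudo mv /etc/logstash/logstash.keystore /etc/logstash/logstash.keystore.bak
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
# Make sure the service user can read it, then list its entries.
sudo chown logstash:logstash /etc/logstash/logstash.keystore
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash list
```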

MyCodeRocks commented 5 years ago

Ok, so we're making small steps:

[ERROR] 2018-09-26 13:51:54.251 [main] secretstorecli - Can not find Logstash keystore at /usr/share/logstash/config/logstash.keystore. Please verify this file exists and is a valid Logstash keystore. {:cause=>nil, :backtrace=>["o

What's interesting is that I created it in /etc/logstash, and my /etc/default/logstash uses that path for the config directory.

Something somewhere is changing this; there's nothing in my logstash.yml that I can see.
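This looks like the settings-directory default: the keystore CLI falls back to /usr/share/logstash/config unless told otherwise. A sketch of working around it:

```bash
# Point the keystore tooling at the real settings directory explicitly...
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash list
# ...or export it for the session; the startup scripts read LS_SETTINGS_DIR.
export LS_SETTINGS_DIR=/etc/logstash
```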

MyCodeRocks commented 5 years ago

Ok, so after a week of working on this and numerous rounds of debugging, it's 80% there.

  1. Logstash is running :) getting Suricata logs, though I am still not sure what it's doing with sFlow :)
  2. All the filters are running and I can see the values loading now.

It all came down to two things:

  1. The port Logstash was on (maybe the OS was holding onto 9600 for some reason).
  2. The keystore got corrupt somehow, for both Elasticsearch and Logstash. I eventually had to scrap both keystores and recreate them.

The good: a LOT of testing, debugging, and playing around really taught me about Linux 👍 and the workings of Logstash.

The bad: this is what I think is missing and why I don't get any ElastiFlow indices (my understanding could be wrong, and I would appreciate correction if so):

  1. When I install logstash-codec-sflow with bin/logstash-plugin install logstash-codec-sflow I get: Validating... Installing... Install Successful. I then do bin/logstash-plugin list --installed and all the plugins show, including sflow. I do this while the logstash service is stopped (just thought this would be best). I start the service with sudo service logstash start. It starts; I tail the logs to see what is going on and see all but the sflow plugin listed as loaded plugins. I go back to bin/logstash-plugin list --installed and get Error: No plugins installed.

That's funny, Logstash, just a moment ago you listed all the plugins... Without the sflow plugin I am assuming (please advise) that no data is flowing. Now, on to data flowing: I pointed the host of IPv4 and sFlow to the host machine with 0.0.0.0 in each of the env variables in the conf file. Basically, what I wanted to test was whether ElastiFlow was working, by using the machine it is installed on to capture network traffic. I am still not getting anything resembling an index. Am I expecting the wrong thing, and is what I explained above with 0.0.0.0 correct/incorrect?
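One way to rule out a root-versus-logstash permissions mismatch is to check the plugin list as the same user the service runs as; a sketch:

```bash
# Run the plugin CLI as the service user; a root-owned Gemfile or plugin
# directory can make the list come back empty for one user but not the other.
sudo -u logstash /usr/share/logstash/bin/logstash-plugin list --installed
# Check ownership of the files that plugin installs touch.
ls -l /usr/share/logstash/Gemfile /usr/share/logstash/Gemfile.lock
```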

MyCodeRocks commented 5 years ago

Following the debug logs again I see:

[2018-09-26T21:38:37,979][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to exists or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[2018-09-26T21:38:39,873][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to load or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[2018-09-26T21:38:41,704][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] retrieved secret urn:logstash:secret:v1:keystore.seed
[2018-09-26T21:38:41,704][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] Using existing keystore at /etc/logstash/logstash.keystore
[2018-09-26T21:38:41,763][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] requested secret urn:logstash:secret:v1:elastiflow_netflow_udp_queue_size not found

This isn't a production server, so I'm not too worried about the keystore info. But it's returning a "not found". This is mostly for the DNS queries, and I would assume this means it couldn't resolve for that IP address, which is fine; just checking and adding some debug info.

MyCodeRocks commented 5 years ago

Some more info, and it's getting interesting. I see it's more than just DNS... I am sure these values are set...

[2018-09-26T21:43:08,617][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@add_tag = [] [2018-09-26T21:43:08,617][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@remove_tag = [] [2018-09-26T21:43:08,617][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@add_field = {} [2018-09-26T21:43:08,617][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@remove_field = [] [2018-09-26T21:43:08,617][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@periodic_flush = false [2018-09-26T21:43:08,617][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@override = false [2018-09-26T21:43:08,617][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@dictionary = {} [2018-09-26T21:43:08,618][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@refresh_interval = 300 [2018-09-26T21:43:08,618][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@exact = true [2018-09-26T21:43:08,618][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@regex = false [2018-09-26T21:43:08,620][DEBUG][logstash.filters.translate] Replacing${ELASTIFLOW_DICT_PATH:/etc/logstash/elastiflow/dictionaries}with actual value [2018-09-26T21:43:08,620][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to exists or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore [2018-09-26T21:43:10,322][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to load or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore [2018-09-26T21:43:12,015][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x33cbec76@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:46 sleep>"} [2018-09-26T21:43:12,172][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] retrieved secret urn:logstash:secret:v1:keystore.seed [2018-09-26T21:43:12,172][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] Using existing keystore at /etc/logstash/logstash.keystore [2018-09-26T21:43:12,216][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] requested secret urn:logstash:secret:v1:elastiflow_dict_path not found [2018-09-26T21:43:12,217][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@dictionary_path = "/etc/logstash/elastiflow/dictionaries/ip_rep_basic.yml" [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@destination = "[@metadata][src_rep_label]" [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@refresh_behaviour = "replace" [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@id = "elastiflow_public_src_rep_label" [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@field = "[flow][src_addr]" [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@enable_metric = true [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@add_tag = [] [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@remove_tag = [] [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@add_field = {} [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config 
LogStash::Filters::Translate/@remove_field = [] [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@periodic_flush = false [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@override = false [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@dictionary = {} [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@refresh_interval = 300 [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@exact = true [2018-09-26T21:43:12,218][DEBUG][logstash.filters.translate] config LogStash::Filters::Translate/@regex = false [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@init = "\n require 'csv'\n " [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@id = "elastiflow_public_src_rep_tags" [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@code = "\n event.set('[flow][src_rep_tags]', event.get('[@metadata][src_rep_label]').parse_csv)\n " [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@enable_metric = true [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@add_tag = [] [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@remove_tag = [] [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@add_field = {} [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@remove_field = [] [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@periodic_flush = false [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@script_params = {} [2018-09-26T21:43:12,221][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@tag_on_exception = "_rubyexception" [2018-09-26T21:43:12,223][DEBUG][logstash.filters.mutate ] config LogStash::Filters::Mutate/@id = "elastiflow_cleanup_geoip_fail_tag" [2018-09-26T21:43:12,224][DEBUG][logstash.filters.mutate ] config LogStash::Filters::Mutate/@remove_tag = ["_geoip_lookup_failure"] [2018-09-26T21:43:12,224][DEBUG][logstash.filters.mutate ] config LogStash::Filters::Mutate/@enable_metric = true [2018-09-26T21:43:12,224][DEBUG][logstash.filters.mutate ] config LogStash::Filters::Mutate/@add_tag = [] [2018-09-26T21:43:12,224][DEBUG][logstash.filters.mutate ] config LogStash::Filters::Mutate/@add_field = {} [2018-09-26T21:43:12,224][DEBUG][logstash.filters.mutate ] config LogStash::Filters::Mutate/@remove_field = [] [2018-09-26T21:43:12,224][DEBUG][logstash.filters.mutate ] config LogStash::Filters::Mutate/@periodic_flush = false [2018-09-26T21:43:12,226][DEBUG][logstash.filters.ruby ] config LogStash::Filters::Ruby/@init = "\n require 'csv'\n

robcowart commented 5 years ago

I have no idea how you even got the keystore involved. I have never used or needed it for ElastiFlow.

MyCodeRocks commented 5 years ago

> I have no idea how you even got the keystore involved. I have never used or needed it for ElastiFlow.

Not going to lie, I have no idea either; I've never used it before on Docker containers I have run or on other ELK stacks. I pulled this from the yum repo for elastic.co as per their documentation.

MyCodeRocks commented 5 years ago

The wonders of watching debug traces for days... closer to my magical Elasticsearch:

[2018-09-26T22:06:47,885][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@0.0.0.0:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@0.0.0.0:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-09-26T22:06:48,475][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"elastiflow", :thread=>"#<Thread:0x250cf45e sleep>"}

Now, because of lack of sleep and looking at this for so damn long, could you please put me out of my misery and point me to which file and env variable this "http://elastic:xxxxxx@0.0.0.0:9200/" sits in?

I haven't changed any settings for user and pass, and I don't think it's required, unless it's the default one with changeme.
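In ElastiFlow 3.x that URL is normally assembled from environment variables in the systemd drop-in mentioned earlier; the variable names below follow that convention and the values are placeholders, so treat this as a sketch. Note that 0.0.0.0 is a listen address, not one you can connect to, which would explain the connection refused:

```bash
# Sketch: set the Elasticsearch output coordinates in the systemd drop-in
# (variable names per ElastiFlow 3.x; values here are placeholders).
sudo tee -a /etc/systemd/system/logstash.service.d/elastiflow.conf >/dev/null <<'EOF'
[Service]
Environment="ELASTIFLOW_ES_HOST=127.0.0.1:9200"
Environment="ELASTIFLOW_ES_USER=elastic"
Environment="ELASTIFLOW_ES_PASSWD=changeme"
EOF
sudo systemctl daemon-reload && sudo systemctl restart logstash
```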

MyCodeRocks commented 5 years ago

Finally "fixed" if I view in Discover in Kibana my Logstash-* index pattern I see some (not sure yet if all) of the fields from Elastiflow. On the debug can see the pipe flowing.

Two things (questions, Kibana related):

  1. I can't import the index pattern; it says: "Sorry, there was an error. Saved objects file format is invalid and cannot be imported." That is by doing Kibana > Management > Saved Objects > Import with the index pattern file in ElastiFlow > Kibana.
  2. If I try to create my own index pattern in Kibana with elastiflow-*, nothing returns as a result and I can't continue creating it...

Am I missing something? The dashboards all imported with no issue; obviously they don't work yet.

robcowart commented 5 years ago

If you see ElastiFlow fields in an index that starts with logstash-* something is wrong. The indices from ElastiFlow should be named elastiflow-VERSION-DATE, e.g. elastiflow-3.3.0-2018.09.27.

The index pattern is not imported via the Kibana UI. That is a new feature added to 6.4.x and the format it expects is different. The instructions explain that the Index Pattern is to be installed with a curl command.
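The shape of that command is roughly as follows; a sketch, with the saved-object id and file path illustrative rather than exact:

```bash
# Import the index pattern through Kibana's saved objects API (6.x), not the
# Management UI. Id and file name here are illustrative; use the ones from
# the ElastiFlow install docs for your version.
curl -X POST -H 'Content-Type: application/json' -H 'kbn-xsrf: true' \
  'http://localhost:5601/api/saved_objects/index-pattern/elastiflow-3.3.0-*' \
  -d @/PATH/TO/elastiflow.index_pattern.json
```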

The setup of ElastiFlow (or really anything with Elastic) is an exact process. The details matter. I know you said that you followed the install steps exactly, but trying to load the Index Pattern via the UI shows that you didn't. I would ask you to go back through everything step by step, slowly, and verify each task. There are literally 1000s of ElastiFlow users, and only a few have had this much trouble. Each time it is because they skipped something they didn't think was important. It really should be a 30 minute process at the most.

MyCodeRocks commented 5 years ago

> If you see ElastiFlow fields in an index that starts with logstash-* something is wrong. The indices from ElastiFlow should be named elastiflow-VERSION-DATE, e.g. elastiflow-3.3.0-2018.09.27.
>
> The index pattern is not imported via the Kibana UI. That is a new feature added to 6.4.x and the format it expects is different. The instructions explain that the Index Pattern is to be installed with a curl command.
>
> The setup of ElastiFlow (or really anything with Elastic) is an exact process. The details matter. I know you said that you followed the install steps exactly, but trying to load the Index Pattern via the UI shows that you didn't. I would ask you to go back through everything step by step, slowly, and verify each task. There are literally 1000s of ElastiFlow users, and only a few have had this much trouble. Each time it is because they skipped something they didn't think was important. It really should be a 30 minute process at the most.

And I agree, and think that is a fair comment. I must have done something wrong somewhere and will go back and check everything. Do I need to do anything (besides trashing my entire install) to "clean up"?

Just so we are clear, from what I've read it looks awesome, hence why I was willing to invest a week into it.

robcowart commented 5 years ago

That is tough to say, as I don't know the overall state/history of the box. BTW, I reviewed above and didn't see any mention of the version of Linux you are using.

In some testing yesterday and today I am seeing issues with 6.4.1 on both Ubuntu and CentOS. I am going to do more testing over the next few days to learn more. You might want to stand by until I have those results.

MyCodeRocks commented 5 years ago

> That is tough to say, as I don't know the overall state/history of the box. BTW, I reviewed above and didn't see any mention of the version of Linux you are using.
>
> In some testing yesterday and today I am seeing issues with 6.4.1 on both Ubuntu and CentOS. I am going to do more testing over the next few days to learn more. You might want to stand by until I have those results.

Thank you for the response. Well, that makes my debugging interesting: Machine 1 (currently with ElastiFlow) is Ubuntu 18.10; Machine 2 (a fresh machine, just the OS; I was going to install the ELK stack and ElastiFlow from scratch today) is CentOS 7.5.

MyCodeRocks commented 5 years ago

P.S. If you need any testing/debugging, let me know; more than happy to run it for you on these two OSes.

robcowart commented 5 years ago

Historically I have done most of my work on CentOS, which is also the OS Elastic uses for its Docker containers. That said, I suspect that it is not an OS-specific issue, rather an issue with Logstash. Elastic have been changing out some of the internals of Logstash, and I suspect that they may be breaking more things than they are fixing. I will be testing 6.1.3, 6.2.4, 6.3.2 and 6.4.1, and will report back on the results.

MyCodeRocks commented 5 years ago

> Historically I have done most of my work on CentOS, which is also the OS Elastic uses for its Docker containers. That said, I suspect that it is not an OS-specific issue, rather an issue with Logstash. Elastic have been changing out some of the internals of Logstash, and I suspect that they may be breaking more things than they are fixing. I will be testing 6.1.3, 6.2.4, 6.3.2 and 6.4.1, and will report back on the results.

Didn't know they used CentOS for their Docker images! Well, if you need any user testing let me know. Going to try to set it up on a fresh CentOS 7.5 machine.

MyCodeRocks commented 5 years ago

Just some feedback. I wasn't going to install ElastiFlow on CentOS 7.5, as you said you were going to test it, so I carried on grinding away at the current install.

  1. Got rid of the keystore, as it was causing the env variables not to be assigned values. No idea where it came from.
  2. I can now see the pipeline for elastiflow; however, it's flatlining and has never had anything come into it.
  3. I have imported the index pattern using curl as you described.
  4. Have successfully imported the dashboards & visualizations.

Here is where I am scratching my head:

  1. There is no index for elastiflow; not elastiflow-3.3.*, nothing.

Looking at the Logstash debug trail I am seeing some data (maybe you could verify whether this is in fact data for the elastiflow pipe).

I see:

[2018-09-27T23:15:08,111][DEBUG][logstash.pipeline        ] millis"=>0, "events_out"=>0, "id"=>:netflow_9_add_dst_mac_in, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:elastiflow_postproc_srcIsSrv_add_dst_country_code, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:elastiflow_postproc_translate_protocol_name, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:elastiflow_postproc_dstIsSrv_add_dst_city, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:ipfix_simple_mappings_add_ip_version, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_ipv4_mappings, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_normalize_bytes_from_fwd_rev_flow_delta_bytes, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:"459d3489a65841221d3e62324ace34bc4b7569e29b50426456467775d2609cd7", "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:elastiflow_rb_sfe_tcp_port_append_port, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:sflow_add_tos_ip_priority, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:sflow_translate_protocol_name, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:elastiflow_postproc_src_port_name_prepend_src_port, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:elastiflow_rb_cfe_tcp_port_name_unknown, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_convert_src_port, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_normalize_fortinet_appids, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:ipfix_add_dst_port_transport, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_simple_mappings, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:elastiflow_postproc_translate_src_port_name_dccp, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:elastiflow_postproc_syn_flag_dstIsSrv, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:ipfix_add_src_port_transport, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:dns_node_name, "events_in"=>0, 
"pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_add_src_port_udp, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_remove_ip_protocol_version, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_remove_dst_vlan, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_add_direction_ingress, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:elastiflow_postproc_translate_dst_port_name_udp, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:netflow_9_convert_packets, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:output_elasticsearch_single, "events_in"=>0, "pipeline_ephemeral_id"=>"bfe35f62-83ee-4e85-a428-ec50a2d13986"}], "queue"=>{"type"=>"persisted", "queue_size_in_bytes"=>1, "max_queue_size_in_bytes"=>1073741824, "events_count"=>0}, "events"=>{"duration_in_millis"=>0, "out"=>0, "filtered"=>0, "queue_push_duration_in_millis"=>0, "in"=>0}, "reloads"=>{"failures"=>0, "successes"=>0}, "hash"=>"d11291d30f57296efac731a4b9b28a55e3221bac9d343ca42c6ebe6ab734c8f5"}, {"id"=>"main", "ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7", "vertices"=>[{"events_out"=>0, "queue_push_duration_in_millis"=>0, "id"=>:"12993980f5a5ea4ef0b05446e6cd0e64b7c0fefb6b696b96fb5dff556f669be6", "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"events_out"=>0, "queue_push_duration_in_millis"=>0, "id"=>:"8bd64364efbd43c8dee4811460daacfb155e54d64c1407916d9415f1d288d75d", "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"events_out"=>11626, "queue_push_duration_in_millis"=>752246, "id"=>:"1e195c60757cfadc5d16106b0c4deee8e5e9e71a740d584b7ebb615839d92f31", "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>3270268, "events_out"=>10650, "id"=>:d469e1e6c061bf6dc3f19c66179ea340ea86348a5889aa07880e1e6635f404be, "events_in"=>10654, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>28502, "events_out"=>10655, "id"=>:c21a83d828cd2243867e9e54e773f8827ee2f402d8f66db34b8df9baeeb18df3, "events_in"=>10655, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>269, "events_out"=>1596, "id"=>:"98e1ec5b6aa7e2d220fba8841ac34160b0f0d79b400296ba5c5a5bfc7eb5a534", "events_in"=>1596, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:"70c96f93d3cc14e5ec1410f8a490740dd9dffef64306d447e0f5e0cd8c76ebbc", "events_in"=>0, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>4631, "events_out"=>10650, "id"=>:bbdf404614ff7f2f03f2ce380e200b2d13661a71a4189ccc79f89a5d8323aa0d, "events_in"=>10650, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>24, "events_out"=>1596, "id"=>:"9cb8586ce704198ac7e423a11018fdc7abd69a064db2cfe5c50f82af8aebfa5b", "events_in"=>1596, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>405021, "events_out"=>10654, 
"id"=>:"7baec228c30d052b123ef280e5a31f1ebc86c2a772fb031afe878baf0ec767cb", "events_in"=>10655, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:"74c37f6f9adb94e96848c564dba8f02fd2ff7f9998f79e93057b1f8fed44928a", "events_in"=>0, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>20241, "events_out"=>10655, "id"=>:"18f5d8d0f6a4ef27d386937a168e17c183a3ac044ab05513827955b005c2c178", "events_in"=>10655, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>0, "events_out"=>0, "id"=>:"287e7cc90c44eed128e138700ea5b4e4cf9b892c8b949e1a9c08c334dd49980c", "events_in"=>0, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"events_out"=>10655, "long_counters"=>[{"name"=>"matches", "value"=>10655}], "id"=>:"5d4d45c5adb3bc554482420f169b96a57f6bb82adc47648143e2b0dc18a9aa39", "duration_in_millis"=>10194, "events_in"=>10655, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"duration_in_millis"=>23, "events_out"=>1596, "id"=>:"277d51f79fe673991f3fa9dcdec7ac004cca546bb4213b59783de76a934dd09c", "events_in"=>1596, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}, {"events_out"=>10375, "long_counters"=>[{"name"=>"documents.successes", "value"=>10375}, {"name"=>"bulk_requests.successes", "value"=>83}, {"name"=>"bulk_requests.responses.200", "value"=>83}], "id"=>:"06deecc47ea15551ebf41f31ca96ca8291ee02af1363bd4df73081897932f400", "duration_in_millis"=>4602, "events_in"=>10375, "pipeline_ephemeral_id"=>"8a1e7515-1988-426b-a990-bd25d8518ef7"}], "queue"=>{"type"=>"memory", "queue_size_in_bytes"=>0, "max_queue_size_in_bytes"=>0, "events_count"=>0}, "events"=>{"duration_in_millis"=>3695545, "out"=>10375, "filtered"=>10375, "queue_push_duration_in_millis"=>752246, "in"=>11626}, "reloads"=>{"failures"=>0, "successes"=>0}, "hash"=>"371a0db1aa1d9da9ae832731cf96c550cfbbde0bc221c79c79f86516dae3fa32"}], "logstash"=>{"version"=>"6.4.1", "pipeline"=>{"batch_size"=>125, "workers"=>8}, "host"=>"HowYouDoing", "ephemeral_id"=>"ebe5dda1-1939-45f1-b81f-9d17a0e1a53e", "status"=>"green", "http_address"=>"127.0.0.1:9500", "snapshot"=>false, "name"=>"HowYouDoing", "uuid"=>"9ee9fca2-d1e7-4271-83ef-877936d080b6"}, "os"=>{"cpu"=>{"load_average"=>{"5m"=>1.14, "1m"=>0.71, "15m"=>1.24}}}, "queue"=>{"events_count"=>0}, "jvm"=>{"gc"=>{"collectors"=>{"young"=>{"collection_time_in_millis"=>6247, "collection_count"=>59}, "old"=>{"collection_time_in_millis"=>190552, "collection_count"=>100}}}, "uptime_in_millis"=>795646, "mem"=>{"heap_used_percent"=>51, "heap_max_in_bytes"=>5298978816, "heap_used_in_bytes"=>2741741480}}, "events"=>{"duration_in_millis"=>3695545, "out"=>10375, "filtered"=>10375, "in"=>11626}, "reloads"=>{"failures"=>0, "successes"=>0}, "process"=>{"open_file_descriptors"=>295, "cpu"=>{"percent"=>1}, "max_file_descriptors"=>222323}, "timestamp"=>2018-09-27T21:15:07.970Z}}

Straight after that I see:

[2018-09-27T23:14:29,082][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"elastiflow", :thread=>"#<Thread:0x4df2635a@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:46 sleep>"}

I have left it to run for two hours thinking it could be doing a delayed batch import.

I also see in the Elasticsearch log that the template was created for index patterns [elastiflow-3.3.0-*].

Feel like I am almost there on this journey; now I just need to figure out why the data isn't landing in an actual index.

Any ideas? I would be really grateful.
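One quick check that separates "no index at all" from "index hidden in Kibana", as a sketch:

```bash
# If flow records were reaching the output, an index would show up here.
curl -s 'localhost:9200/_cat/indices/elastiflow-*?v'
# The template existing only proves the output plugin started, not that any
# events were ever written.
curl -s 'localhost:9200/_template/elastiflow-3.3.0' | head -c 200; echo
```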

Ravivenkatachalam commented 5 years ago

Hi, I am also facing a similar issue, it seems; awaiting robcowart's results. Please refer to the steps in the issue below: https://github.com/elastic/logstash/issues/10027

MyCodeRocks commented 5 years ago

Just checking in: did this ever get investigated/resolved?

MyCodeRocks commented 5 years ago

Fresh CentOS 7.x install, ELK 6.4.2, and the same issue is still there.

rudyamid commented 5 years ago

I've just begun my journey to install ElastiFlow. I'm using the latest Elasticsearch 6.6 (Kibana and Logstash as well), and having a tough time getting Logstash to come up properly and listen on UDP port 2055 for Netflow logs. The startup takes forever and I'm not sure why.

Previously, I've been using Logstash's included Netflow module and that worked great out of the box. I guess it just proves that any external plugins not pre-tested/packaged by Elastic may not work after version upgrades.

robcowart commented 5 years ago

@rudyamid, Elastic based the Logstash Netflow Module on ElastiFlow 1.0.0. The reason it loads faster is because the level of data enrichment and overall functionality is very minimal. It is also far less frequently maintained than ElastiFlow (look up the issues it still has working with the Logstash multi-pipeline features).

The number one issue that users have is giving the Logstash JVM sufficient memory, which is really an issue of not following the instructions. This requirement can be somewhat reduced if the IP reputation dictionary is not used (reducing it to a single dummy line). However, there is no getting around the fact that a lot more pipeline logic needs to be loaded by the current releases of ElastiFlow than the Logstash Netflow Module. This logic is necessary to provide all of the additional features requested by the community.
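For the heap-sizing point, the JVM memory is set in Logstash's jvm.options; a sketch, where 4 GB is an illustrative figure rather than a documented ElastiFlow minimum:

```bash
# Inspect the current heap flags...
grep -E '^-Xm[sx]' /etc/logstash/jvm.options
# ...and raise them, keeping -Xms and -Xmx equal.
sudo sed -i 's/^-Xms.*/-Xms4g/; s/^-Xmx.*/-Xmx4g/' /etc/logstash/jvm.options
sudo systemctl restart logstash
```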