robcowart / elastiflow

Network flow analytics (Netflow, sFlow and IPFIX) with the Elastic Stack

Unable to import elastiflow.kibana.7.3.x.ndjson into Kibana 7.3.0 #402

Closed maf23 closed 4 years ago

maf23 commented 4 years ago

I was trying to import elastiflow.kibana.7.3.x.ndjson into a new Kibana system running 7.3.0, but all I got was: "Sorry, there was an error. The file could not be processed."

I was, however, successful in uploading elastiflow.kibana.7.0.x.json.

robcowart commented 4 years ago

Was there anything interesting in the Kibana or Elasticsearch logs?

maf23 commented 4 years ago

No, nothing at all.

--

Martin Forssén

Director of Information Security

Recorded Future https://www.recordedfuture.com/

+46 760 252357

maf@recordedfuture.com

townium commented 4 years ago

I've just stood up a clean build of 7.3.0 and get the same error. Like maf23 said, elastiflow.kibana.7.0.x.json works for me as well.

I tried to validate the JSON of elastiflow.kibana.7.3.x.ndjson and I got errors (via https://jsonlint.com/), though I'm unsure whether it supports the NDJSON format (I really don't know the difference, ha!).
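
For reference, the difference is that the .json export is a single JSON document, while .ndjson is newline-delimited JSON: one standalone JSON object per line. Whole-file validators such as jsonlint will therefore reject a well-formed NDJSON file. A minimal line-by-line check, as a sketch (the file path is whatever you downloaded locally):

```python
import json

def validate_ndjson(path):
    """Return a list of (line_number, error) tuples for lines that are
    not valid standalone JSON. An empty list means the NDJSON is OK."""
    errors = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # blank lines are allowed between records
            try:
                json.loads(line)
            except json.JSONDecodeError as exc:
                errors.append((lineno, str(exc)))
    return errors
```

Usage: `validate_ndjson("elastiflow.kibana.7.3.x.ndjson")` — an empty result means the file itself is fine and the import failure lies elsewhere (e.g. in a proxy).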

robcowart commented 4 years ago

A few questions for the two of you...

Elastic Stack OSS or X-Pack/Elastic License?

Cluster or single node?

Security configured? If so which roles does the user have?

What resources are available (CPU, RAM, etc.)?

Rob

mcore commented 4 years ago

I have the same error on non-OSS Kibana. Single node, default setup on Docker. 7.0 imports with no problem. The server is not doing anything, as Logstash is turned off.

maf23 commented 4 years ago

OSS. I had one Elasticsearch node and one Kibana node (separate machines). No security, and it does not seem to be resource constrained.

robcowart commented 4 years ago

Do either of you have Elasticsearch or Kibana behind a reverse proxy? I am wondering if limits on upload size are causing an issue. For example...

By default, Nginx has a limit of 1MB on file uploads. To raise the upload size limit, you can use the client_max_body_size directive, which is part of Nginx's ngx_http_core_module module. This directive can be set in the http, server or location context.
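
As a sketch (the server name and upstream address are placeholders, not taken from this thread), the limit can be raised like this:

```nginx
server {
    listen 80;
    server_name kibana.example.com;  # placeholder

    location / {
        # nginx defaults to 1m and answers larger request bodies with
        # 413 Request Entity Too Large; raise the limit so the
        # saved-objects import can be POSTed to Kibana.
        client_max_body_size 10m;
        proxy_pass http://127.0.0.1:5601;  # placeholder Kibana upstream
    }
}
```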

mcore commented 4 years ago

I do have Nginx as reverse proxy. Both files (7.0 and 7.3) are exactly the same size though.

maf23 commented 4 years ago

I also have nginx in front of it and will try to increase the size, but as MontiCore said, they are the same size.

robcowart commented 4 years ago

Can you guys test going straight to Kibana? I'm just trying to narrow things down at this point.

maf23 commented 4 years ago

Changing the client_max_body_size solved the issue for me.

mcore commented 4 years ago

What size did you set for it?

maf23 commented 4 years ago

2m, and I did see an error in the nginx log that the body was too large before I changed it.

mcore commented 4 years ago

I am still getting the same error. Here is the nginx log:

2019/08/23 10:02:38 [error] 29812#29812: *240 upstream timed out (110: Connection timed out) while reading response header from upstream, client: x.x.x.x, server: server.domain.com, request: "POST /api/saved_objects/_import?overwrite=true HTTP/1.1", upstream: "http://127.0.0.1:5601/api/saved_objects/_import?overwrite=true", host: "server.domain.com", referrer: "https://server.domain.com/app/kibana"

I am trying to overwrite the 7.0 objects, though.

robcowart commented 4 years ago

The 7.0 JSON file loads a lot more slowly than the newer NDJSON in 7.3. By default nginx will time out the session after 60s. Try increasing this value to something like 5 minutes.

More info here... http://www.doublecloud.org/2014/03/nginx-how-to-fix-timeout-issues-and-more/
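
A sketch of the relevant proxy timeout directives (the values and upstream address are illustrative, not from this thread):

```nginx
location / {
    proxy_pass http://127.0.0.1:5601;  # placeholder Kibana upstream

    # nginx's proxy_read_timeout defaults to 60s, which matches the
    # "upstream timed out ... while reading response header" error above.
    # Raise it so the slow 7.0.x JSON import can complete.
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
}
```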

mcore commented 4 years ago

OK, I connected directly to Kibana without Nginx, deleted all saved objects, and am still getting: "Sorry, there was an error. The file could not be processed." No errors in the Docker logs.

robcowart commented 4 years ago

Just to clarify... you tested elastiflow.kibana.7.3.x.ndjson, correct?

mcore commented 4 years ago

yes!

robcowart commented 4 years ago

@mcore you mentioned "default setup on docker". Did you use the docker-compose.yml that I provide in the ElastiFlow repository, or did you start Kibana some other way?

There is a Kibana setting server.maxPayloadBytes, which is similar to nginx's client_max_body_size. The default value for this setting is 1048576 bytes.

In the docker-compose.yml that I provide I set this to 4194304 bytes with the environment variable SERVER_MAXPAYLOADBYTES: 4194304.

Can you confirm whether you have set this to a higher value, or whether it is still the default? If it isn't set, please set it as I have above and retest. Thanks.
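
For anyone running Kibana outside Docker, the equivalent kibana.yml setting (same value as the environment variable above) would be a sketch like:

```yaml
# kibana.yml — raise the maximum request body Kibana will accept.
# The default is 1048576 bytes (1 MB), which is too small for the
# 7.3.x NDJSON saved-objects import.
server.maxPayloadBytes: 4194304
```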

mcore commented 4 years ago

I am looking at the docker-compose.yml in your repo, and I don't see SERVER_MAXPAYLOADBYTES there... Yes, I just pulled docker-compose.yml from your repo and used it (with small modifications, like memory size and storage directory).

robcowart commented 4 years ago

Ahhh... yeah. Just saw that now. Sorry... I have a bunch of different deployment configs I use. I think I went too simple there. Give me a few minutes.

robcowart commented 4 years ago

Please take a look at this one... https://github.com/robcowart/elastiflow/blob/master/docker-compose.yml

I have added SERVER_MAXPAYLOADBYTES and changed the logging back to the standard settings, which are more verbose.

mcore commented 4 years ago

Same error, in the log:

{
  "type":"error",
  "@timestamp":"2019-08-23T16:44:42Z",
  "tags":["warning","process"],
  "pid":1,
  "level":"error",
  "error":{
    "message":"Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 158)",
    "name":"UnhandledPromiseRejectionWarning",
    "stack":"SyntaxError: Unexpected token < in JSON at position 2
        at JSON.parse (<anonymous>)
        at ndJsonStream.pipe.pipe.str (/usr/share/kibana/src/legacy/server/saved_objects/lib/create_saved_objects_stream_from_ndjson.js:31:19)
        at Transform.transform [as _transform] (/usr/share/kibana/src/legacy/utils/streams/map_stream.js:35:26)
        at Transform._read (_stream_transform.js:190:10)
        at Transform._write (_stream_transform.js:178:12)
        at doWrite (_stream_writable.js:410:12)
        at clearBuffer (_stream_writable.js:540:7)
        at onwrite (_stream_writable.js:465:7)
        at Transform.afterTransform (_stream_transform.js:94:3)
        at Transform.transform [as _transform] (/usr/share/kibana/src/legacy/utils/streams/map_stream.js:36:9)
        at process._tickCallback (internal/process/next_tick.js:68:7)"
  },
  "message":"Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 158)"
}
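
For reference, the `Unexpected token < in JSON at position 2` in that stack trace means the stream Kibana tried to parse began with `<` — typically an HTML page (for example, a GitHub web page saved instead of the raw file, or a proxy error page) rather than NDJSON. A quick sanity check on the local file, as a sketch:

```shell
# A well-formed saved-objects NDJSON export starts with '{';
# an accidentally saved HTML page starts with '<'.
check_ndjson() {
  first=$(head -c 1 "$1")
  if [ "$first" = "{" ]; then
    echo "looks like NDJSON"
  else
    echo "unexpected first byte: $first (is this the raw file?)"
  fi
}
```

Usage: `check_ndjson elastiflow.kibana.7.3.x.ndjson`
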

robcowart commented 4 years ago

Can you send me your exact docker-compose.yml? I am struggling with what to try next since I can't reproduce the issue myself.

mcore commented 4 years ago

sure

version: '3'

services:
  elastiflow-elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
    container_name: elastiflow-elasticsearch
    restart: 'no'
    ulimits:
      memlock:
        soft: -1
        hard: -1
    network_mode: host
    volumes:
      - /opt/netflow/elastiflow_es:/usr/share/elasticsearch/data
    environment:
      # JVM Heap size
      #   - this should be at least 2GB for simple testing, receiving only a few flows per second.
      #   - for production environments upto 31GB is recommended.
      ES_JAVA_OPTS: '-Xms24g -Xmx24g'

      cluster.name: ES-NETFLOW-PRIMARY

      bootstrap.memory_lock: 'true'

      network.host: 127.0.0.1
      http.port: 9200
      discovery.type: 'single-node'

      indices.query.bool.max_clause_count: 8192
      search.max_buckets: 100000

      action.destructive_requires_name: 'true'

  elastiflow-kibana:
    image: docker.elastic.co/kibana/kibana:7.3.0
    container_name: elastiflow-kibana
    restart: 'no'
    depends_on:
      - elastiflow-elasticsearch
    network_mode: host
    environment:
      SERVER_HOST: 0.0.0.0
      SERVER_PORT: 5601
      SERVER_MAXPAYLOADBYTES: 4194304

      ELASTICSEARCH_HOSTS: "http://127.0.0.1:9200"

      KIBANA_DEFAULTAPPID: "dashboard/653cf1e0-2fd2-11e7-99ed-49759aed30f5"

      LOGGING_DEST: stdout
      LOGGING_QUIET: 'false'

  elastiflow-logstash-oss:
    image: robcowart/elastiflow-logstash-oss:3.5.1
    container_name: elastiflow-logstash-oss
    restart: 'no'
    depends_on:
      - elastiflow-elasticsearch
    network_mode: host
    environment:
      # JVM Heap size - this MUST be at least 3GB (4GB preferred)
      LS_JAVA_OPTS: '-Xms6g -Xmx6g'

      # ElastiFlow global configuration
      ELASTIFLOW_DEFAULT_APPID_SRCTYPE: "cisco_nbar2"

      # Name resolution option
      ELASTIFLOW_RESOLVE_IP2HOST: "true"
      ELASTIFLOW_NAMESERVER: "8.8.8.8"

      ELASTIFLOW_NETFLOW_IPV4_PORT: 2055
      ELASTIFLOW_SFLOW_IPV4_PORT: 6343
      ELASTIFLOW_IPFIX_TCP_IPV4_PORT: 4739

godsik commented 4 years ago

Clean install of Ubuntu 18.04.3, Kibana 7.3.1, all updates, without nginx. VM: 8 CPU / 16 GB RAM / 40 GB HDD (RAID SSD). ElastiFlow 3.5.1.

/etc/elasticsearch/jvm.options and /etc/logstash/jvm.options: -Xms4g -Xmx4g

/etc/kibana/kibana.yml: server.maxPayloadBytes: 4194304

When importing the dashboards: "Sorry, there was an error"

alfredosola commented 4 years ago

FWIW: I landed here looking for this exact error. The cause was that we have nginx as a reverse proxy in front of Kibana. The solution was to add this to the nginx configuration: client_max_body_size 10M;

robcowart commented 4 years ago

Thanks @alfredosola