elastic / logstash

Logstash - transport and process your logs, events, or other data
https://www.elastic.co/products/logstash

[indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist] #5517

Open realrobmorris opened 8 years ago

realrobmorris commented 8 years ago

Putting this here because there have been no replies in the forums.

I'm a first-time user of ELK. I originally created an issue against elasticsearch on GitHub, but it was suggested that I bring it to this forum instead, and so here we are.

In my /var/log/elasticsearch/logstashTesting.log file, all I have are entries that begin with this: [indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist]
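
A quick way to see which indices actually have @timestamp mapped is the field-mapping API. Here's a sketch, assuming Elasticsearch answers on localhost:9200 (substitute your own host):

# Shows, per index, how @timestamp is mapped; indices that lack the field
# should come back with an empty mappings object
curl 'http://localhost:9200/_mapping/field/@timestamp?pretty'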

Elasticsearch (ELK) version:

[root@logstash ~]# yum list installed | grep -E '(elasticsearch|logstash|kibana)'
elasticsearch.noarch   2.3.3-1          @elasticsearch-2.x                      
kibana.x86_64          4.5.1-1          @kibana-4.5                             
logstash.noarch        1:2.3.2-1        @logstash-2.3   

JVM version:

[root@logstash ~]# java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
[root@logstash ~]# 

OS version:

[root@logstash ~]# cat /etc/redhat-release 
CentOS release 6.7 (Final)
[root@logstash ~]# 

Provide logs (if relevant):

RemoteTransportException[[logstash][<ip_redacted>:9300][indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist];
Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:166)
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:54)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:282)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:278)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-06-13 10:51:40,146][DEBUG][action.fieldstats        ] [logstash] [.kibana][0], node[o3rmPA87QB2R7bDSvUD9Fw], [P], v[4], s[STARTED], a[id=9YqIdu6LQguolACS17Bo1g]: failed to execute [org.elasticsearch.action.fieldstats.FieldStatsRequest@4b00875d]
RemoteTransportException[[logstash][<ip_redacted>:9300][indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist];
Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:166)
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:54)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:282)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:278)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-06-13 10:52:17,348][DEBUG][action.fieldstats        ] [logstash] [.kibana][0], node[o3rmPA87QB2R7bDSvUD9Fw], [P], v[4], s[STARTED], a[id=9YqIdu6LQguolACS17Bo1g]: failed to execute [org.elasticsearch.action.fieldstats.FieldStatsRequest@616f19c]
RemoteTransportException[[logstash][<ip_redacted>:9300][indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist];
Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:166)
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:54)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:282)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:278)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
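
Worth noting: the failing shard in every one of these traces is [.kibana][0], Kibana's own metadata index, which has no @timestamp field. The field_stats calls therefore look like they come from Kibana querying an index pattern that sweeps in .kibana, not from Logstash. The same request can be reproduced by hand; a sketch, assuming the cluster answers on localhost:9200:

# Reproduces the field_stats request from the traces; against .kibana
# (which has no @timestamp mapping) this should trigger the same
# shard-level failure on ES 2.3
curl 'http://localhost:9200/.kibana/_field_stats?fields=@timestamp&pretty'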

logstash config file

input {
  beats {
    port => 5044
  }
}

filter {
  date {
    locale => "en"
    match => ["mytimestamp", "YYYY-MM-dd HH:mm:ss"]
    target => "@timestamp"
  }
  grok {
    match => [ "message", "%{GREEDYDATA:message}"]
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "<ip_redacted>:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  if "OMG" in [message] {
   email {
    from => "logstash@<myhost>.com"
    subject => "logstash alert"
    to => "<my_user>@<myhost>.com"
    via => "sendmail"
    body => "Here is the event line that occurred: %{message}"
   }
 }
}

elasticsearch config file

[root@logstash ~]# cat /etc/elasticsearch/elasticsearch.yml 
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: logstashTesting
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/data/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: <ip_redacted>
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true

kibana config file

[root@logstash ~]# cat /opt/kibana/config/kibana.yml 
# Kibana is served by a back end server. This controls which port to use.
# server.port: 5601

# The host to bind the server to.
# server.host: "0.0.0.0"

# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
# server.basePath: ""

# The maximum payload size in bytes on incoming server requests.
# server.maxPayloadBytes: 1048576

# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://<ip_redacted>:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# elasticsearch.username: "user"
# elasticsearch.password: "pass"

# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# elasticsearch.ssl.ca: /path/to/your/CA.pem

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
# elasticsearch.ssl.verify: true

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
# elasticsearch.requestTimeout: 30000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# elasticsearch.startupTimeout: 5000

# Set the path to where you would like the process id file to be created.
# pid.file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# logging.dest: stdout

# Set this to true to suppress all logging output.
# logging.silent: false

# Set this to true to suppress all logging output except for error messages.
# logging.quiet: false

# Set this to true to log all events, including system usage information and all requests.
# logging.verbose: false
[root@logstash ~]# 

Question: So what's going on here? (Also, Logstash isn't sending the email when the match is found.)

Also, FYI.

This morning I removed the manage_template => false setting from the logstash config file, but I'm still getting the same error. Here's what elasticsearch/logstashTesting.log says:

[root@logstash log]# tail -25 elasticsearch/logstashTesting.log
RemoteTransportException[[logstash][10.240.91.231:9300][indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist];
Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:166)
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:54)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:282)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:278)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-06-14 09:40:35,308][DEBUG][action.fieldstats        ] [logstash] [.kibana][0], node[bRLLyJytS2K0jzf1n0aV9g], [P], v[6], s[STARTED], a[id=32gvMwnNS8iatdjuBGERtg]: failed to execute [org.elasticsearch.action.fieldstats.FieldStatsRequest@6ff8f4b9]
RemoteTransportException[[logstash][10.240.91.231:9300][indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist];
Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:166)
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:54)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:282)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:278)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[root@logstash log]# 
realrobmorris commented 8 years ago

Changed up the logstash.conf file:

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => [ "message", "%{GREEDYDATA:message}"]
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "<ip>:9200"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  if "OMG" in [message] {
   email {
    from => "logstash@<host>"
    subject => "logstash alert"
    to => "<user>@<host>"
    via => "sendmail"
    body => "Here is the event line that occurred: %{message}"
   }
 }
}

By the way, this is what logstash.stdout says:

{
       "message" => [
        [0] "Jun 17 14:07:58 <nodehost_redacted> root: This is another message ERROR WARN OMG",
        [1] "Jun 17 14:07:58 <nodehost_redacted> root: This is another message ERROR WARN OMG"
    ],
      "@version" => "1",
    "@timestamp" => "2016-06-17T18:08:03.667Z",
          "type" => "log",
          "beat" => {
        "hostname" => "<host_redacted>",
            "name" => "<host_redacted>"
    },
        "source" => "/var/log/messages",
        "offset" => 56471433,
        "fields" => nil,
    "input_type" => "log",
         "count" => 1,
          "host" => "<host_redacted>",
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ]
}

So there does seem to be an @timestamp in logstash.stdout; why, then, would the elasticsearch/logstashTesting.log file have the error:

[indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist];
Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist
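
Two things stand out in the rubydebug output above. First, the doubled message array is the grok filter's doing: capturing %{GREEDYDATA:message} into the already-existing message field appends a second copy. Second, the @timestamp shown there is the one Logstash stamps on each event it ships, while the exception is raised against the .kibana index, which never receives those events. To confirm the beats indices themselves are mapped correctly, something like this should work (a sketch; adjust the pattern to whatever %{[@metadata][beat]} resolves to, e.g. filebeat-*):

# Checks the @timestamp mapping on the beats indices only
curl 'http://localhost:9200/filebeat-*/_mapping/field/@timestamp?pretty'
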
jamesl1234 commented 8 years ago

I am getting this as well! It started last week, and now it takes ages to search for data, even when opening the default site and viewing all logs with the default '*' (star) query...

Looking at /var/log/elasticsearch/cluster01.log, I can see Marvel is having an issue:

[2016-07-02 12:24:03,545][DEBUG][action.fieldstats        ] [kib02] [.marvel-es-1-2016.07.02][0], node[0FBqNX3jQbaHx0S9Ko0sMQ], [P], v[3], s[STARTED], a[id=Q7p1dpI4Qcip2Yc2SjhYGA]: failed to execute [org.elasticsearch.action.fieldstats.FieldStatsRequest@6965d5ba]
RemoteTransportException[[els04][192.168.10.24:9300][indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist];
Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:166)
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:54)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:282)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:278)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:300)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

realrobmorris commented 8 years ago

Have you figured this out?

jamesl1234 commented 8 years ago

Hi DeadPirateRob. The data I had in my ES wasn't important, so I deleted everything with this command:

curl -XDELETE 'http://localhost:9200/_all'

I also posted my problem here: https://discuss.elastic.co/t/default-and-marvel-index-are-throwing-java-errors-about-timestamp-not-existing/54591/1

So no, not really fixed, but at least I have a workaround (albeit a hack) :)
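
In hindsight, a narrower version of the same workaround would be to drop only the index the stack trace names instead of everything; a sketch (Marvel should recreate its daily index on its own):

# Deletes just the Marvel index from the trace, rather than _all
curl -XDELETE 'http://localhost:9200/.marvel-es-1-2016.07.02'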

realrobmorris commented 8 years ago

OK, FYI: I changed up my logstash file; it now looks like this:

input {
  beats {
    port => 5044
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "<ip_redacted>:9200"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  if "OMG" in [message] {
   email {
    to => "user_email@host.com"
    from => "logstash.alert@host.com"
    via => "sendmail"
    subject => "ELK Syslog Alert concerning %{host}"
    body => "This syslog has an interesting string and was thus picked up on host (%{host}). Via this message: %{message}"
   }
 }
}

I haven't seen the error happen again, and the email sending is working correctly.
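
For anyone who wants to exercise the alert path end to end, writing a matching line into syslog should do it. A sketch, assuming filebeat is still shipping /var/log/messages as in the rubydebug output earlier in the thread:

# Appends a line with the trigger string to syslog (/var/log/messages on CentOS);
# the "OMG" conditional in the output block should then fire the email
logger 'This is a test message ERROR WARN OMG'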

realrobmorris commented 8 years ago

As an update, it's still working well.