The originating address is OA:10.65.226.142, which is not allowed.
That was a typo in the IP we mentioned. We have included "10.65.226.142" in the configuration file, and even with that we are getting the same error. Please find the configuration below:
# from these IP addresses, accept any method, any URI, any HTTP body
- name: Admin access to internal server hosts
  type: allow
  hosts: [127.0.0.1, 10.65.226.142]
The log is from a configuration other than the one reported: the history field does not show "Accept requests from users in group radmin" as a rule, and "Admin access to internal server hosts" is not in the history either.
I think you should delete the logs and repeat the test.
Please find below the error logs we are getting in Logstash while pushing data:
[ERROR] 2017-11-14 12:01:59.268 [[main]-pipeline-manager] elasticsearch - Failed to install template. {:message=>"Got response code '403' contacting Elasticsearch at URL 'http://es_cluster_dnsname:9200/_template/ec2_utilizationv2'", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError", :backtrace=>[
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:80:in 'perform_request'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:269:in 'perform_request_to_url'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:257:in 'perform_request'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:347:in 'with_connection'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:256:in 'perform_request'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:264:in 'head'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:330:in 'template_exists?'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:78:in 'template_install'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:29:in 'install'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:9:in 'install_template'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:58:in 'install_template'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:25:in 'register'",
"/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in 'register'",
"/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in 'register'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in 'register_plugin'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in 'register_plugins'",
"org/jruby/RubyArray.java:1613:in 'each'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in 'register_plugins'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310:in 'start_workers'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in 'run'",
"/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in 'start_pipeline'"]}
Hi,
do you use an index template in your ES?
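For reference, you can list the templates currently installed in the cluster with a plain GET; the host below is taken from the error log above, so adjust it to your setup:

# list all index templates currently installed in the cluster
curl 'http://es_cluster_dnsname:9200/_template?pretty'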
Anyway, I tested here with 1.16.14.pre1 and ES 2.4.x, with these two cases:
A. Using an ES template, with a Logstash output like this (preventing Logstash from managing the template):
output {
  if [type] == "telephony" {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["https://10.11.12.13:9200"]
      flush_size => "500"
      ssl => true
      user => "logstash_BE_TEST"
      password => "logstash_BE_TEST"
      cacert => "D:/ELKG_TEST/logstash_BE/certs/elkg-rootca_cert.pem"
      manage_template => "false"
      index => "log_lu_ei_tel_%{type}-%{+YYYY.MM.dd}"
      workers => 4
    }
  }
}
and
B. Not using an ES template, and letting Logstash push directly whatever it wants to ES:
output {
  if [type] == "telephony" {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["https://10.11.12.13:9200"]
      flush_size => "500"
      ssl => true
      user => "logstash_BE_TEST"
      password => "logstash_BE_TEST"
      cacert => "D:/ELKG_TEST/logstash_BE/certs/elkg-rootca_cert.pem"
      # manage_template => "false"
      index => "pizza-%{+YYYY.MM.dd}"
      workers => 4
    }
  }
}
Both work.
Here is the block rule I use for Logstash:
- name: "Logstash can write and create its own indices"
type: allow
auth_key: logstash_BE_TEST:logstash_BE_TEST
actions: ["cluster:monitor/main", "indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create","indices:admin/types/exists"]
indices: ["logstash-*", "log*"]
I hope this helps you fix it on your side; I would recommend adjusting your RoR block accordingly.
Kr
fred
Thanks. I have tried the configuration in a standalone ES 5.6 node (1 node) and it works.
- name: "Logstash can write and create its own indices"
type: allow
actions: ["cluster:monitor/main", "indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create","indices:admin/types/exists"]
I tried the same in my ES 5.6 cluster (3 data nodes behind an AWS ELB), but it is not creating the index. I also tried pushing data directly to a data node using Logstash, and that failed to create the index too. There is no error in the logs. If I create the index manually and then run Logstash, it pushes the data into that index. I am using Logstash 5.6.
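For reference, the manual workaround described here (creating the index by hand before running Logstash) amounts to a plain PUT; the endpoint is taken from the output block below:

# create the index up front so the bulk writes have somewhere to land
curl -XPUT 'http://loadbalancer_dns:9090/ec2_utilization829'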
output {
  elasticsearch {
    hosts => ["loadbalancer_dns:9090"]
    template_name => ec2_utilizationv2
    index => "ec2_utilization829"
    document_id => "%{Instance_Id}-%{Datetime-CPU-Utilization}"
  }
  stdout { codec => rubydebug }
}
Elasticsearch logs
[2017-11-15T10:03:41,712][INFO ][t.b.r.a.ACL ] ALLOWED by { name: 'Accept requests from users in group r_readonly_es_admin', policy: ALLOW} req={ ID:1771218530-1606479297#27721, TYP:BulkRequest, CGR:N/A, USR:[no basic auth header], BRS:true, ACT:indices:data/write/bulk, OA:10.65.226.155, IDX:ec2_utilization829, MET:POST, PTH:/_bulk, CNT:<OMITTED, LENGTH=121407>, HDR:Accept-Encoding,Connection,Content-Length,Content-Type,Host,User-Agent, HIS:[Accept requests from users in group r_readonly_es_admin->[hosts->true, methods->true]], [full access to internal servers->[hosts->false]] }
Logstash logs
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[DEBUG] 2017-11-15 10:00:46.517 [LogStash::Runner] DateFilter - Date filter with format=yyyy-MM-dd'T'HH:mm:ss.SSSz, locale=null, timezone=null built as org.logstash.filters.parser.JodaParser
Mmmh,
is the same RoR configuration deployed on all the nodes behind loadbalancer_dns? And make sure each of them has been restarted.
One thing you can try is to replace loadbalancer_dns:9090 directly with each node, to see which one refuses; a possible way to do that is sketched below.
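A quick sketch of that check (the node hostnames and port are assumptions; adjust to your cluster):

# query each data node directly, bypassing the ELB, and print the HTTP status
for node in datanode-1 datanode-2 datanode-3; do
  echo -n "$node: "
  curl -s -o /dev/null -w "%{http_code}\n" "http://$node:9200/_template/ec2_utilizationv2"
done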
In fact, I mention this because I also run a cluster of 3 nodes without issue.
Ld
Hi @sscarduzio / @ld57
Thanks for the update. We tried the entire cluster with the configuration you mentioned in method A (using an ES template, with a Logstash output that prevents Logstash from managing the template), and this too gave us a positive result.
I would like to briefly describe the infrastructure we are using here:
ES version: 5.6
Authentication plugin: ReadonlyREST
Mechanism to push data: Logstash (logstash-5.6.0-1.noarch)
Data we are pushing to ES: CSV files
Data node structure: 3 nodes behind an ELB
The Logstash configuration we are using is:
input {
  file {
    path => "/tmp/utilization.csv"
    start_position => beginning
    type => "utilization"
    sincedb_path => "/dev/null"
    ignore_older => 7776000
  }
}
filter {
  if [type] == "utilization" {
    csv {
      columns => ["Instance_Id","Name-Tag","Ip-Address","VPC-Id","Instance-Type","Application-Tag","Environment-Tag","Role-Tag","Stack-Tag","AWS-Account Name","AWS-Account-Number","Avg-CPU-Utilization","Max-CPU-Utilization","Min-CPU-Utilization","Datetime-CPU-Utilization","Avg-NetworkIn","Max-NetworkIn","Min-NetworkIn","Sum-NetworkIn","Datetime-NetworkIn","Avg-NetworkOut","Max-NetworkOut","Min-NetworkOut","Sum-NetworkOut","Datetime-NetworkOut","Avg-DiskReadBytes","Max-DiskReadBytes","Min-DiskReadBytes","Sum-DiskReadBytes","Datetime-DiskRead","Avg-DiskWriteBytes","Max-DiskWriteBytes","Min-DiskWriteBytes","Sum-DiskWriteBytes","Datetime-DiskWrite"]
      separator => ","
      remove_field => ["message","Datetime-DiskRead","Datetime-DiskWrite","Datetime-NetworkIn","Datetime-NetworkOut"]
    }
    mutate {
      convert => {
        "Avg-CPU-Utilization" => "float"
        "Max-CPU-Utilization" => "float"
        "Min-CPU-Utilization" => "float"
        "Avg-NetworkIn" => "float"
        "Max-NetworkIn" => "integer"
        "Min-NetworkIn" => "integer"
        "Sum-NetworkIn" => "integer"
        "Avg-NetworkOut" => "float"
        "Max-NetworkOut" => "integer"
        "Min-NetworkOut" => "integer"
        "Sum-NetworkOut" => "integer"
        "Avg-DiskReadBytes" => "float"
        "Max-DiskReadBytes" => "integer"
        "Min-DiskReadBytes" => "integer"
        "Sum-DiskReadBytes" => "integer"
        "Avg-DiskWriteBytes" => "float"
        "Max-DiskWriteBytes" => "integer"
        "Min-DiskWriteBytes" => "integer"
        "Sum-DiskWriteBytes" => "integer"
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://ec2.es.cluster"]
    template_name => utilization
    index => "utilization12"
    manage_template => "false"
    document_id => "%{Instance_Id}-%{Datetime-CPU-Utilization}"
  }
  stdout { codec => rubydebug }
}
How we are executing Logstash with this configuration:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/new.conf > /tmp/logstash.log &
The log we get while running this command is: Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console. After that we can see the data being pushed to ES (but we cannot see the index there).
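A side note on that warning: it usually means Logstash was started without its settings directory, so it falls back to console-only logging. Assuming the standard package layout, pointing Logstash at it should restore proper log files, which would help debugging (paths below are the usual package defaults; adjust if yours differ):

# tell Logstash where its settings (including log4j2.properties) live
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/new.conf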
The template we are using on ES:
{
"utilization": {
"order": 0,
"template": "utilization*",
"settings": {
"index": {
"number_of_shards": "5"
}
},
"mappings": {
"utilization": {
"properties": {
"Avg-NetworkIn": {
"type": "double"
},
"Avg-CPU-Utilization": {
"type": "double"
},
"Datetime-CPU-Utilization": {
"format": "YYYY-MM-dd HH:mm:ss",
"type": "date"
},
"Max-CPU-Utilization": {
"type": "double"
},
"Ip-Address": {
"type": "string"
},
"Stack-Tag": {
"index": "not_analyzed",
"type": "string"
},
"Environment-Tag": {
"index": "not_analyzed",
"type": "string"
},
"Min-DiskWriteBytes": {
"type": "long"
},
"type": {
"type": "string"
},
"Avg-DiskWriteBytes": {
"type": "double"
},
"path": {
"type": "string"
},
"AWS-Account Name": {
"index": "not_analyzed",
"type": "string"
},
"Min-NetworkOut": {
"type": "long"
},
"@version": {
"type": "string"
},
"host": {
"type": "string"
},
"Instance_Id": {
"index": "not_analyzed",
"type": "string"
},
"Min-DiskReadBytes": {
"type": "long"
},
"Instance-Type": {
"index": "not_analyzed",
"type": "string"
},
"Max-DiskWriteBytes": {
"type": "long"
},
"Min-NetworkIn": {
"type": "long"
},
"Avg-DiskReadBytes": {
"type": "double"
},
"Max-DiskReadBytes": {
"type": "long"
},
"Name-Tag": {
"index": "not_analyzed",
"type": "string"
},
"Role-Tag": {
"index": "not_analyzed",
"type": "string"
},
"AWS-Account-Number": {
"type": "string"
},
"Avg-NetworkOut": {
"type": "double"
},
"Sum-DiskReadBytes": {
"type": "long"
},
"Sum-NetworkIn": {
"type": "long"
},
"Max-NetworkOut": {
"type": "long"
},
"Max-NetworkIn": {
"type": "long"
},
"@timestamp": {
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
},
"Sum-DiskWriteBytes": {
"type": "long"
},
"Sum-NetworkOut": {
"type": "long"
},
"Application-Tag": {
"index": "not_analyzed",
"type": "string"
},
"VPC-Id": {
"index": "not_analyzed",
"type": "string"
},
"Min-CPU-Utilization": {
"type": "double"
}
}
},
"type1": {
"_source": {
"enabled": false
},
"properties": {
"AWS-Account Name": {
"index": "not_analyzed",
"type": "string"
},
"Application-Tag": {
"index": "not_analyzed",
"type": "string"
},
"Name-Tag": {
"index": "not_analyzed",
"type": "string"
},
"Datetime-CPU-Utilization": {
"format": "YYYY-MM-dd HH:mm:ss",
"type": "date"
},
"Role-Tag": {
"index": "not_analyzed",
"type": "string"
},
"VPC-Id": {
"index": "not_analyzed",
"type": "string"
},
"Instance_Id": {
"index": "not_analyzed",
"type": "string"
},
"Instance-Type": {
"index": "not_analyzed",
"type": "string"
},
"Stack-Tag": {
"index": "not_analyzed",
"type": "string"
},
"Environment-Tag": {
"index": "not_analyzed",
"type": "string"
}
}
}
},
"aliases": {}
}
}
ReadonlyREST configuration
- name: "Logstash can write and create its own indices"
type: allow
actions: ["cluster:monitor/main", "indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create","indices:admin/types/exists"]
indices: ["utilization*", "log*"]
The response we are getting on the data node while pushing data from Logstash:
[2017-11-15T16:55:20,651][INFO ][t.b.r.a.ACL ] ALLOWED by { name: 'Accept write and read requests from the below given ip address ranges', policy: ALLOW} req={ ID:1031023039-1597452516#42843, TYP:ClusterHealthRequest, CGR:N/A, USR:[no basic auth header], BRS:true, ACT:cluster:monitor/health, OA:10.65.226.149, IDX:, MET:GET, PTH:/_cluster/health, CNT:<N/A>, HDR:Accept,Accept-Encoding,Connection,content-length,Host,User-Agent, HIS:[Accept write and read requests from the below given ip address ranges->[hosts->true, methods->true]], [Admin access to internal server hosts->[hosts->false]] }
CSV Sample
Instance_Id,Name-Tag,Ip-Address,VPC-Id,Instance-Type,Application-Tag,Environment-Tag,Role-Tag,Stack-Tag,AWS-Account Name,AWS-Account-Number,Avg-CPU-Utilization,Max-CPU-Utilization,Min-CPU-Utilization,Datetime-CPU-Utilization,Avg-NetworkIn,Max-NetworkIn,Min-NetworkIn,Sum-NetworkIn,Datetime-NetworkIn,Avg-NetworkOut,Max-NetworkOut,Min-NetworkOut,Sum-NetworkOut,Datetime-NetworkOut,Avg-DiskReadBytes,Max-DiskReadBytes,Min-DiskReadBytes,Sum-DiskReadBytes,Datetime-DiskRead,Avg-DiskWriteBytes,Max-DiskWriteBytes,Min-DiskWriteBytes,Sum-DiskWriteBytes,Datetime-DiskWrite
i-09736551b22819822,test,10.75.226.15,vpc-b4b169d0,t2.micro,Cloud Security,Untagged,Active logs,AD,logs,123455667543,5.79583333333,14.83,1.5,2017-11-08 23:00:00,10439.4166667,84124.0,2616.0,626365.0,2017-11-08 23:00:00,9651.81666667,32466.0,4757.0,579109.0,2017-11-08 23:00:00,0.0,0.0,0.0,0.0,2017-11-08 23:00:00,0.0,0.0,0.0,0.0,2017-11-08 23:00:00,
i-09736551b22819822,test,10.75.226.15,vpc-b4b169d0,t2.micro,Cloud Security,Untagged,Active logs,AD,logs,123455667543,5.82116666667,12.83,0.67,2017-11-09 00:00:00,8180.16666667,26623.0,2618.0,490810.0,2017-11-09 00:00:00,9014.7,18480.0,4757.0,540882.0,2017-11-09 00:00:00,0.0,0.0,0.0,0.0,2017-11-09 00:00:00,0.0,0.0,0.0,0.0,2017-11-09 00:00:00,
Please let us know how we can have Logstash create the index (using the template above) when it does not exist, and parse the data into ES.
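For reference, one way to guarantee the template exists independently of Logstash is to install it once by hand (the host is taken from the output block above; the file name holding the template JSON is an assumption):

# install the template up front so manage_template => "false" can rely on it
curl -XPUT 'http://ec2.es.cluster/_template/utilization' -H 'Content-Type: application/json' -d @utilization_template.json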
Hi @johnakash
In your elasticsearch log I see HIS:[Accept write and read requests from the below given ip address ranges->[hosts->true, methods->true]], [Admin access to internal server hosts->[hosts->false]] }
This tells me you have some other block rules in your readonlyrest config. Once a block rule matches, evaluation stops. In your config I can see a matching block rule named Accept write and read requests from the below given ip address ranges.
Order matters for block rules: they are evaluated from top to bottom. Could you place the Logstash block rule before all the other block rules and retry the test?
Also, regarding your Logstash: I will give you a hint for the logs tomorrow. Without your Logstash log, you cannot know what happens (for example, your field settings in Logstash may differ from your index template configuration in Elasticsearch). I have run into trouble with things like this in the past.
Also, tomorrow I will give you an "open bar" template to put in Elasticsearch, to be sure to avoid field setting conflicts.
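(The "open bar" template itself never appears in this thread; purely as a sketch, a maximally permissive catch-all template on ES 5.x could look like the following, where the template name and values are assumptions:)

# hypothetical catch-all template relying on fully dynamic mappings
PUT _template/open_bar
{
  "template": "*",
  "order": 0,
  "settings": { "number_of_shards": 5 },
  "mappings": {
    "_default_": { "dynamic": true }
  }
}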
Of course, you can first simply move the Logstash rule to the top of the readonlyrest config, and that alone may fix your issue entirely.
Ld
@ld57,
Thanks for the response. Please find below all the rule blocks we have configured:
readonlyrest:
  enable: true
  response_if_req_forbidden: Sorry, your request is forbidden
  access_control_rules:

  # from these IP addresses, accept any method, any URI, any HTTP body
  - name: Admin access to internal server hosts
    type: allow
    hosts: [127.0.0.1]

  - name: Accept write and read requests from the below given ip address ranges
    type: allow
    hosts: [10.65.226.0/24, 10.75.226.0/24, 10.65.228.0/24, 10.75.228.0/24]
    methods: [OPTIONS,GET,PUT,POST]

  - name: Readonly Access to below given IP ranges
    type: allow
    hosts: [10.0.0.0/8]
    methods: [GET]

  - name: "Logstash can write and create its own indices"
    type: allow
    actions: ["cluster:*", "indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create","indices:admin/types/exists"]

  - name: "Logstash can write and create its own indices"
    type: allow
    actions: ["cluster:monitor/main", "indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create","indices:admin/types/exists"]
    indices: ["utilization*", "log*"]
Could you please let us know in which order we need to configure these blocks?
Our expectation:
Could you please re-order the blocks mentioned above to achieve this?
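A sketch of one possible ordering, following ld57's advice above: the Logstash rule goes first, the broad read-only catch-all last (merging the two duplicate Logstash blocks into one is an assumption about intent):

access_control_rules:

  # most specific consumer first: Logstash, matched on actions and indices
  - name: "Logstash can write and create its own indices"
    type: allow
    actions: ["cluster:monitor/main", "indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create","indices:admin/types/exists"]
    indices: ["utilization*", "log*"]

  # from these IP addresses, accept any method, any URI, any HTTP body
  - name: Admin access to internal server hosts
    type: allow
    hosts: [127.0.0.1]

  - name: Accept write and read requests from the below given ip address ranges
    type: allow
    hosts: [10.65.226.0/24, 10.75.226.0/24, 10.65.228.0/24, 10.75.228.0/24]
    methods: [OPTIONS,GET,PUT,POST]

  # broad read-only catch-all last
  - name: Readonly Access to below given IP ranges
    type: allow
    hosts: [10.0.0.0/8]
    methods: [GET]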
@ld57 / @sscarduzio Just for your information: we removed all blocks except the Logstash one and tried to push data again. We got the log below, and even then the indices are not being created.
[2017-11-15T19:48:48,642][INFO ][t.b.r.a.ACL ] ALLOWED by { name: 'Logstash can write and create its own indices', policy: ALLOW} req={ ID:1431264518-1504161272#1171, TYP:BulkRequest, CGR:N/A, USR:[no basic auth header], BRS:true, ACT:indices:data/write/bulk, OA:10.65.226.149, IDX:ec2_utilization-final, MET:POST, PTH:/_bulk, CNT:<OMITTED, LENGTH=150913>, HDR:Accept-Encoding,Connection,Content-Length,Content-Type,Host,User-Agent, HIS:[Logstash can write and create its own indices->[indices->true, actions->true]] }
ReadonlyREST configuration in /etc/elasticsearch/elasticsearch.yml:
- name: "Logstash can write and create its own indices"
type: allow
actions: ["cluster:monitor/main", "indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create","indices:admin/types/exists"]
indices: ["ec2_utilization*", "log*"]
We added one more block and tested again; we are getting the same response. ReadonlyREST config:
- name: "Logstash can write and create its own indices"
type: allow
hosts: [0.0.0.0/0]
actions: ["cluster:monitor/main", "indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create","indices:admin/types/exists"]
indices: ["ec2_utilization*", "log*"]
- name: Accept write and read requests from the below given ip address ranges
type: allow
hosts: [10.65.226.0/24, 10.75.226.0/24, 10.65.228.0/24, 10.75.228.0/24]
methods: [OPTIONS,GET,PUT,POST]
The index is not getting created. Elasticsearch log:
[2017-11-15T20:24:20,255][INFO ][t.b.r.a.ACL ] ALLOWED by { name: 'Logstash can write and create its own indices', policy: ALLOW} req={ ID:1068108424-180190166#1109, TYP:BulkRequest, CGR:N/A, USR:[no basic auth header], BRS:true, ACT:indices:data/write/bulk, OA:10.65.226.149, IDX:ec2_utilization933, MET:POST, PTH:/_bulk, CNT:<OMITTED, LENGTH=146489>, HDR:Accept-Encoding,Connection,Content-Length,Content-Type,Host,User-Agent, HIS:[Logstash can write and create its own indices->[indices->true, hosts->true, actions->true]] }
Hi,
OK, first let's see what Logstash tells us about its attempts to create the index in ES.
Modify your Logstash output as follows:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["http://ec2.es.cluster:9090"] # fix the correct hosts and port according to your config
    #template_name => utilization
    index => "utilization12-%{+YYYY.MM.dd}"
    #manage_template => "false"
    #document_id => "%{Instance_Id}-%{Datetime-CPU-Utilization}"
  }
  file { path => "choose_your_path/logstash_failed_events-%{+YYYY-MM-dd}" }
}
Also, when you try, it would be interesting to see the Elasticsearch logs (not only the RoR ones).
Hi ld57,
I have made the changes, but we still have the problem. logstash_failed_events logs:
{"Avg-NetworkIn":0.0,"Avg-CPU-Utilization":0.0,"Max-CPU-Utilization":0.0,"Datetime-CPU-Utilization":"Datetime-CPU-Utilization","Ip-Address":"Ip-Address","Stack-Tag":"Stack-Tag","type":"ec2_utilization","Environment-Tag":"Environment-Tag","Min-DiskWriteBytes":0,"path":"/tmp/ec2_utilizationv2_2017-11-09-003800.csv","Avg-DiskWriteBytes":0.0,"AWS-Account Name":"AWS-Account Name","Min-NetworkOut":0,"@version":"1","host":"prd-mgt-elk-log-w2c-b","Instance_Id":"Instance_Id","Min-DiskReadBytes":0,"Instance-Type":"Instance-Type","Max-DiskWriteBytes":0,"Min-NetworkIn":0,"Avg-DiskReadBytes":0.0,"Name-Tag":"Name-Tag","Max-DiskReadBytes":0,"Role-Tag":"Role-Tag","AWS-Account-Number":"AWS-Account-Number","Avg-NetworkOut":0.0,"Sum-NetworkIn":0,"Sum-DiskReadBytes":0,"Max-NetworkOut":0,"tags":["_dateparsefailure"],"Max-NetworkIn":0,"@timestamp":"2017-11-16T10:00:53.447Z","Sum-NetworkOut":0,"Sum-DiskWriteBytes":0,"Application-Tag":"Application-Tag","VPC-Id":"VPC-Id","Min-CPU-Utilization":0.0}
{"Avg-NetworkIn":10439.4166667,"Avg-CPU-Utilization":5.79583333333,"Max-CPU-Utilization":14.83,"Datetime-CPU-Utilization":"2017-11-08 23:00:00","Ip-Address":"10.75.226.15","Stack-Tag":"AD","type":"ec2_utilization","Environment-Tag":"Untagged","Min-DiskWriteBytes":0,"path":"/tmp/ec2_utilizationv2_2017-11-09-003800.csv","Avg-DiskWriteBytes":0.0,"AWS-Account Name":"tmosecurity","Min-NetworkOut":4757,"@version":"1","host":"prd-mgt-elk-log-w2c-b","Instance_Id":"i-09736551b22819822","Min-DiskReadBytes":0,"Instance-Type":"t2.micro","Max-DiskWriteBytes":0,"Min-NetworkIn":2616,"Avg-DiskReadBytes":0.0,"Name-Tag":"COMADSTASK004","Max-DiskReadBytes":0,"Role-Tag":"Active Directory","AWS-Account-Number":"667978626758","Avg-NetworkOut":9651.81666667,"Sum-NetworkIn":626365,"Sum-DiskReadBytes":0,"Max-NetworkOut":32466,"column36":null,"tags":["_dateparsefailure"],"Max-NetworkIn":84124,"@timestamp":"2017-11-16T10:00:53.449Z","Sum-NetworkOut":579109,"Sum-DiskWriteBytes":0,"Application-Tag":"Cloud Security","VPC-Id":"vpc-b4b169d0","Min-CPU-Utilization":1.5}
Elasticsearch logs
[2017-11-16T05:01:04,204][INFO ][t.b.r.a.ACL ] ALLOWED by { name: 'Logstash can write and create its own indices', policy: ALLOW} req={ ID:430680879-1766402792#29348, TYP:BulkRequest, CGR:N/A, USR:[no basic auth header], BRS:true, ACT:indices:data/write/bulk, OA:10.65.226.155, IDX:utilization12-2017.11.16, MET:POST, PTH:/_bulk, CNT:<OMITTED, LENGTH=146214>, HDR:Accept-Encoding,Connection,Content-Length,Content-Type,Host,User-Agent, HIS:[Logstash can write and create its own indices->[]] }
[2017-11-16T05:01:04,462][INFO ][t.b.r.a.ACL ] ALLOWED by { name: 'Logstash can write and create its own indices', policy: ALLOW} req={ ID:1117230624-1646914288#29349, TYP:BulkRequest, CGR:N/A, USR:[no basic auth header], BRS:true, ACT:indices:data/write/bulk, OA:10.65.226.155, IDX:utilization12-2017.11.16, MET:POST, PTH:/_bulk, CNT:<OMITTED, LENGTH=17649>, HDR:Accept-Encoding,Connection,Content-Length,Content-Type,Host,User-Agent, HIS:[Logstash can write and create its own indices->[]] }
[2017-11-16T05:01:04,471][INFO ][t.b.r.a.ACL ] ALLOWED by { name: 'Logstash can write and create its own indices', policy: ALLOW} req={ ID:690042087-82978137#29350, TYP:BulkRequest, CGR:N/A, USR:[no basic auth header], BRS:true, ACT:indices:data/write/bulk, OA:10.65.226.155, IDX:utilization12-2017.11.16, MET:POST, PTH:/_bulk, CNT:<OMITTED, LENGTH=145871>, HDR:Accept-Encoding,Connection,Content-Length,Content-Type,Host,User-Agent, HIS:[Logstash can write and create its own indices->[]] }
[2017-11-16T05:01:04,574][INFO ][t.b.r.a.ACL ] ALLOWED by { name: 'Logstash can write and create its own indices', policy: ALLOW} req={ ID:955606089-758666979#29351, TYP:BulkRequest, CGR:N/A, USR:[no basic auth header], BRS:true, ACT:indices:data/write/bulk, OA:10.65.226.155, IDX:utilization12-2017.11.16, MET:POST, PTH:/_bulk, CNT:<OMITTED, LENGTH=6960>, HDR:Accept-Encoding,Connection,Content-Length,Content-Type,Host,User-Agent, HIS:[Logstash can write and create its own indices->[]] }
@ld57 / @sscarduzio / @johnakash The issue is resolved. In the Elasticsearch configuration file, "action.auto_create_index" was set to false. I changed it to true, and now I am able to create the index via Logstash.
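For reference, the relevant line in elasticsearch.yml looks like this; the pattern-list variant is a more restrictive alternative supported by Elasticsearch, and whether it fits this cluster is an assumption:

# allow indices to be created implicitly by bulk/index requests (e.g. from Logstash);
# a node restart is needed for the change to take effect
action.auto_create_index: true

# more restrictive alternative: only allow auto-creation for matching names
# action.auto_create_index: "+utilization*,+log*,-*"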
Ah, good news!
On the other hand, I would have expected to see something in the Logstash log file about Elasticsearch rejecting the index creation, or something like that...
Mmhh, I need to go further with my logs approach.
@ld57 / @sscarduzio / @johnakash
Thanks for the help. I am closing this case.
I am not able to create an index from a remote system (192.168.1.142) via Logstash using the "csv filter plugin". I am getting the error:
FORBIDDEN by default req={ ID:1454358066-1489483709#141, TYP:GetIndexTemplatesRequest, CGR:N/A, USR:[no basic auth header], BRS:true, ACT:indices:admin/template/get, OA:10.65.226.142, IDX:<N/A>, MET:HEAD, PTH:/_template/ec2_utilizationv2, CNT:<N/A>, HDR:Accept-Encoding,Connection,content-length,Content-Type,Host,User-Agent, HIS:[Accept requests from users in group support_group->[hosts->true, methods->false]], [Accept requests from users in group readonly_group->[methods->false, hosts->true]], [full access to internal servers->[hosts->false]] }