When Logstash starts with `--config.reload.automatic`, the file input ingests all data as long as no reload occurs. However, if the pipeline is reloaded in the middle of ingestion, say after 300 of 600 lines have been read, Logstash reads the first 300 lines again and leaves the rest unread.
Version: 4.2.4
LS Version: 7.12
Operating System: macOS
Config File (if you have sensitive info, please remove it):
```yaml
pipeline.id: SDH_650
pipeline.workers: 1
pipeline.batch.size: 5
config.string: |
  input {
    file {
      path => "/650/merged.csv"
      mode => "read"
      start_position => "beginning"
    }
  }
  filter {
    csv {
      separator => ","
      columns => ["id", "host", "fqdn", "IP", "mac", "role", "type", "make", "model", "oid", "fid", "time"]
      remove_field => ["path", "host", "message", "@version"]
    }
  }
  output {
    elasticsearch { index => "650" }
    stdout { codec => rubydebug }
  }
```
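For reference, a minimal way to generate a 600-line sample file matching the columns in the config above (the output path and field values here are placeholders, not data from the original report):

```shell
# Generate a 600-line CSV with the twelve columns listed in the csv filter.
# Placeholder path; the report uses /650/merged.csv.
out=merged.csv
: > "$out"
for i in $(seq 1 600); do
  echo "$i,host$i,host$i.example.com,10.0.0.$((i % 255)),00:11:22:33:44:55,web,server,acme,m1,$i,$i,2021-01-01T00:00:00Z" >> "$out"
done
wc -l "$out"
```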
Steps to Reproduce:
1. Start Logstash with auto-reload: `bin/logstash --config.reload.automatic`
2. Change `pipeline.workers` from 1 to 2 during ingestion, or change `pipeline.workers` multiple times during ingestion; either change triggers a pipeline reload.

Currently the workaround is to use `tail` mode.
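For illustration, the workaround amounts to changing only the `mode` setting in the input block above (a sketch, not taken from the original report):

```
input {
  file {
    path => "/650/merged.csv"
    mode => "tail"
    start_position => "beginning"
  }
}
```

In `tail` mode the input records its position in the sincedb as it reads, which is presumably why a reload resumes from the last offset here instead of starting the file over as `read` mode does.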