My operating system is Ubuntu 18.04.
@Nagabhairavanitish it's hard to tell for sure from screenshots since the information is incomplete. In the future please consider filling in the issue template (the text you see when you open a new issue).
Quick guess: please try removing "http://" from all your "host" settings in the Logstash and Kibana configuration files. Just use `host:port`, like in the repo's default configuration.
Also, if you're trying to hardcode the IP address of the Elasticsearch container instead of using its internal DNS name (just `elasticsearch:9200`), you're in for a bad surprise, because Docker assigns dynamic IPs to containers as soon as they get recreated. Prefer sticking to the default config, which works out of the box.
Hi, thanks for the suggestion. My trouble is that, instead of localhost, I want to use 10.20.210.40 to connect all my services. When I use it, I can connect to Kibana, Elasticsearch, and Logstash's REST API on port 9600 perfectly, but I am unable to receive any logs from my Python production code at all. Is there a problem in my configuration, or did I make a mistake? Please help me out.
@Nagabhairavanitish the stack doesn't use localhost; it uses internal names resolved by Docker / Docker Compose. It doesn't matter where your containers run (your machine, the cloud, a private server, ...): if you use `elasticsearch:9200`, things will just work without requiring an explicit IP like the one you used. When you run `docker-compose up`, all services resolve `elasticsearch:9200` to the correct IP automatically. Give it a try!
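For illustration (a minimal sketch, not from the original message): from inside any container attached to the same Compose network, Docker's embedded DNS resolves the service name to whatever IP is currently assigned to the container.

```python
import socket

# Run this inside a container on the "elk" network (not on the host):
# the Compose service name resolves to the container's current IP.
print(socket.gethostbyname('elasticsearch'))
```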
Regarding your Python application, could you please describe how you are trying to send logs? What protocol, what address, what log shipper? Please provide as many details as possible, consider that I have zero knowledge of your application.
Hi, I tried your method, and now I am getting all sorts of errors in Kibana and Logstash that I am unable to resolve. Here is my Python file:
```python
import logging
import os

from logstash_async.formatter import LogstashFormatter
from logstash_async.handler import AsynchronousLogstashHandler

host = '10.20.210.40'
port = 5044


class Logger:
    def __init__(self, service):
        self.service = service
        logstash_formatter = LogstashFormatter(
            extra={
                'application': 'sizzle',
                'custom_log_version': '0.9.9',
                'environment': 'production',
                'microservice': service,
            })
        logstash_handler = AsynchronousLogstashHandler(
            host,
            port,
            database_path='/tmp/logstash-{}.db'.format(os.getpid())
        )
        logstash_handler.setFormatter(logstash_formatter)
        self.logger = logging.getLogger('python-logstash-logger')
        self.logger.setLevel(logging.INFO)
        self.logger.addHandler(logstash_handler)

    def _dynamic_logging(self, level, log_message, queue_record, extra):
        if level in ['info', 'debug', 'warning', 'error', 'critical']:
            extra_fields = {}
            if queue_record is not None:
                extra_fields['message_id'] = queue_record['_id']
                if 'streamer' in queue_record:
                    extra_fields['streamer_name'] = queue_record['streamer']['name']
                    extra_fields['streamer_id'] = queue_record['streamer']['_id']
                if 'twitch' in queue_record:
                    extra_fields['twitch_stream_id'] = queue_record['twitch']['id']
                    extra_fields['twitch_stream_published_at'] = queue_record['twitch']['published_at']
                if 'video_file' in queue_record:
                    if 'duration_seconds' in queue_record['video_file']:
                        extra_fields['video_duration'] = queue_record['video_file']['duration_seconds']
                    if 'frames' in queue_record['video_file']:
                        extra_fields['video_number_of_frames'] = queue_record['video_file']['frames']
            if extra is not None:
                extra_fields.update(extra)
            # print("Log Message: {}, extras: {}".format(log_message, extra_fields))
            getattr(self.logger, level)(log_message, extra=extra_fields)

    def info(self, log_message, queue_record=None, extra=None):
        self._dynamic_logging('info', log_message=log_message, queue_record=queue_record, extra=extra)

    def debug(self, log_message, queue_record=None, extra=None):
        self._dynamic_logging('debug', log_message=log_message, queue_record=queue_record, extra=extra)

    def warning(self, log_message, queue_record=None, extra=None):
        self._dynamic_logging('warning', log_message=log_message, queue_record=queue_record, extra=extra)

    def error(self, log_message, queue_record=None, extra=None):
        self._dynamic_logging('error', log_message=log_message, queue_record=queue_record, extra=extra)

    def critical(self, log_message, queue_record=None, extra=None):
        self._dynamic_logging('critical', log_message=log_message, queue_record=queue_record, extra=extra)


# pass the name of the microservice, which will be used to filter the logs in Kibana
# Some guidelines for extra fields:
# progress: for long-running tasks, log the beginning as {'progress': 'begin'} and the end as {'progress': 'end'}
# message_type: standardized short description to classify the type of log message
# Try to include both functional names and record ids (e.g. both streamer name and streamer_id) so that in the
# dashboard we can use the functional names during realtime monitoring, while we can use the actual ObjectIds to
# reliably follow the lifecycle of a specific record through the pipeline
```
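For context, this is how the class might be used (a hypothetical example; the service name and record contents are invented for illustration):

```python
logger = Logger('video-encoder')  # hypothetical microservice name
logger.info(
    'transcoding finished',
    queue_record={'_id': 'abc123'},  # hypothetical queue record
    extra={'progress': 'end'},       # per the guidelines above
)
```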
@Nagabhairavanitish you're setting `port = 5044`, which corresponds to the Beats input. However, the default transport is TCP (port 5000 in docker-elk), according to the python-logstash-async docs:

> `transport` ... Default: `logstash_async.transport.TcpTransport`

Use port 5000 instead of 5044.
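In other words (a minimal sketch of the port-5000 option; the rest of your handler setup stays unchanged):

```python
# Point the handler at Logstash's TCP input (port 5000 in docker-elk);
# the default TcpTransport used by logstash_async matches this input.
logstash_handler = AsynchronousLogstashHandler(
    host,
    5000,
    database_path='/tmp/logstash-{}.db'.format(os.getpid())
)
```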
Alternatively, if you want to keep port 5044, import the correct transport class:

```python
from logstash_async.transport import BeatsTransport
```

then pass the matching transport to your handler:
```python
logstash_handler = AsynchronousLogstashHandler(
    host,
    port,
    # the transport instance must match the input listening on `port`
    transport=BeatsTransport(host, port),
    database_path='/tmp/logstash-{}.db'.format(os.getpid())
)
```
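Either way, the key point is that the transport must match the Logstash input listening on the port you target: the default `TcpTransport` pairs with the `tcp` input (5000 in docker-elk), while `BeatsTransport` pairs with the `beats` input (5044).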
For option 1, I did this setup:
```yaml
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  restart: unless-stopped
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  ports:
    - "5044:5044/tcp"
    - "5044:5044/udp"
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch
```
I figured it out late, so I opened port 5044 on TCP as well. I also modified the Logstash config file:
```
input {
  beats {
    port => 5000
  }
  tcp {
    port => 5044
    codec => json
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level}-%{GREEDYDATA:message}" }
  }
  date {
    match => ["timestamp", "ISO8601"]
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "http://10.20.210.40:9200"
    user => "elastic"
    password => "RDDMIl43NKJ5KUk7Jhr4"
  }
  stdout { codec => rubydebug }
}
```
Here is the config file. Thanks a lot for helping me out, but I have been stuck trying to find a way out for weeks.
@Nagabhairavanitish you're taking a very confusing path by swapping those ports directly in Logstash. My recommendation was to adjust the port in your Python code, and keep the default port values in Logstash.
This issue is drifting away from the original topic and we are now covering how to interact with Logstash in production code, which is not a scope we can realistically offer support for in this repo, so I am going to close it to avoid going in circles.
As a final recommendation before closing:

- Revert the Logstash ports to the repo's defaults and, if needed, reinitialize the stack from scratch (`docker-compose down -v`).
- Change the port in your Python code instead; after any Logstash configuration change, restart the service (`docker-compose restart logstash`).

If any issue arises with the stack (for issues related to Python, please consider Python-related forums), feel free to open a new issue with the requested information filled in, in text format (no screenshots), and click "Preview" before submitting to ensure the formatting is correct (like I did with all the messages you posted earlier). The clearer the information, the easier it is for us to assist you. Thank you for your understanding!
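As a quick sanity check before digging further (a minimal sketch, assuming the default TCP input on port 5000 and the IP you mentioned), you can verify from the machine running your Python code that the Logstash input is reachable at all:

```python
import socket

# Open a plain TCP connection to the Logstash input. If this raises
# ConnectionRefusedError or times out, the problem is networking or
# port mapping, not the logging code itself.
with socket.create_connection(('10.20.210.40', 5000), timeout=5):
    print('Logstash TCP input is reachable')
```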
I am trying to connect to my Elasticsearch URL when I run Logstash.

[screenshots in the original post: docker-compose file, Logstash file, Elasticsearch file, Kibana, error]

I set my host => 10.20.210.40:9200, so why does it try to connect to localhost:9200?