BroadSoft-Xtended / BroadWorks-Dashboards-and-Discovery

This repository contains the BroadWorks Dashboards and Discovery components to extend BroadWorks data mining, reporting, and analysis capabilities.

bwlogreceiver issues #80

Closed: bfortune2 closed 5 years ago

bfortune2 commented 5 years ago

I'm struggling to get my bwlogsender and bwlogreceiver talking, and I don't see exactly what the issue is. I've configured the bwlogreceiver on the same server as ES, and I've been able to load XSLog files manually using the bwlogfileprocessor. However, when I turn on the bwlogsender on an AS and send directly to port 9072 on the Elasticsearch server with bwlogreceiver running, I get an error on the collector side:

2019-08-21_07:50:11.224 [Sender Thread #0] INFO c.broadsoft.zipsender.NetworkSocket - Connection attempt to Receiver failed (ConnectionException)
2019-08-21_07:50:16.224 [Sender Thread #0] INFO c.broadsoft.zipsender.NetworkSocket - Attempting to create socket to Receiver
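(A quick way to test the TCP path by hand, independent of bwlogsender; a minimal sketch that assumes nc is installed, using the ess1 hostname from the prompts below:)

# From the AS: check whether anything answers on the receiver port
nc -vz ess1 9072

# On the ES host itself: confirm the receiver is actually bound to 9072
ss -tlnp | grep 9072

If the port answers locally on the ES host but not from the AS, something between the two is blocking it.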

I can run tcpdump on the ES server (see the capture sketch below) and I see the traffic from the AS reaching the ES host on port 9072. bwlogreceiver appears to be running and I'm not seeing any issues:

[elastic@ess1 bwlogreceiver]$ ./bwlogreceiver.pl --showrun

Log Receiver is running as pid 14827
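(For reference, a capture along these lines is what shows the packets arriving; a sketch, and the interface name eth0 is an assumption, substitute the real one:)

# Watch for sender traffic on the receiver port
tcpdump -i eth0 -nn tcp port 9072

Seeing inbound SYNs here with no reply from the receiver is consistent with a local firewall dropping the packets, since tcpdump captures inbound traffic before iptables filters it.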

Here's the logreceiver props file:

[elastic@ess1 bwlogreceiver]$ cat logreceiver.props

# This should be the hostname or IP Address of the ElasticSearch Server
elasticserver:127.0.0.1

# This is the port number of the REST Interface on the ElasticSearch Server:
# Usually the value is between 9200 and 9205
elasticport:9200

# This is the ElasticSearch Clustername
# This should match the configuration of the ElasticSearch Server
# in file config/elasticsearch.yml element "cluster.name"
elasticclustername:bwc4

# This is the Port that the Log Senders send Logs to:
receiverport:9072

# Indicates whether the Sender and Receiver socket is using SSL:
senderreceiverusessl:false

# This is the size of the queue from which the log processor threads
# pull. Not recommended to change unless you understand the JVM memory
# pressure that will come to bear.
logprocessorqueuesize:200

# This is the number of processor threads parsing and individually indexing
# the logs into ElasticSearch.
# Should roughly be the number of CPU instances of the hardware.
logprocessornumthreads:8

# This is the size of the JVM Heap. Not recommended to change unless
# instructed by BroadSoft.
jvmheapsize:1024m

# Path to the bin directory of the java installation.
# If you are getting this prompt, this script was unable to locate
# java in the standard locations.
# Should be something like /usr/java/bin
JAVA_PATH:/usr/local/java/java_base/bin

# If using ElasticSearch authentication, this is the user to use.
# Leave default (NOAUTH) if not utilizing authentication.
ESAUTHUSER:NOAUTH

# If using ElasticSearch authentication, this is the password to use.
# Leave default (NOAUTH) if not utilizing authentication.
ESAUTHPASS:NOAUTH

# The filesystem path to the truststore, which holds trusted CA certs.
# Truststore must be a JKS or PKCS12 file.
truststorepath:elastic

# The password for verifying the truststore content.
truststorepassword:elastic

# This should be the hostname or IP Address of the Kafka Server
kafkaserver:None

# This is the port number of the Kafka Server:
kafkaserverport:9092

# This is the topic name to subscribe to for the logs:
kafkastopicname:None

# This is the group name to subscribe with for the logs:
kafkasgroupname:None

# This indicates whether to use Kafka (true or false):
usekafka:false

[elastic@ess1 bwlogreceiver]$
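(Since the receiver indexes into the local ES instance on 9200, it's also worth confirming the cluster name matches; a minimal check that assumes no authentication, per the NOAUTH settings above:)

[elastic@ess1 bwlogreceiver]$ curl -s http://127.0.0.1:9200

The JSON response includes a "cluster_name" field, which should read bwc4 to match elasticclustername above.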

I'm not sure what's going on. It has to be something simple, but I've been staring at this and can't figure it out. I'd welcome any assistance I can get. :-)

dstewart-broadsoft commented 5 years ago

I would check iptables on the ES host; most of the distributions secure the interfaces by default these days. Check "iptables --list" as root.

To verify, you can temporarily clear the iptables configuration with "iptables --flush".
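(For reference, the checks would look something like the following; a sketch, where the targeted ACCEPT rule is a less destructive alternative to flushing, and the firewall-cmd lines apply only if the host runs firewalld:)

# List the current filter rules as root
iptables --list -n

# Destructive test: clear all rules (inbound traffic then passes only if
# the chain's default policy is ACCEPT)
iptables --flush

# Targeted alternative: allow the receiver port without dropping everything else
iptables -I INPUT -p tcp --dport 9072 -j ACCEPT

# On firewalld-based distributions, the persistent equivalent:
firewall-cmd --permanent --add-port=9072/tcp
firewall-cmd --reload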

bfortune2 commented 5 years ago

Thank you Dave. That was it!