Azure / azure-diagnostics-tools

Plugins and tools for collecting, processing, managing, and visualizing diagnostics data and configuration

azurewadtable - not getting any data pulled out #112

Open Joffinn opened 6 years ago

Joffinn commented 6 years ago

Hi,

We have our logs in Azure cloud storage; so far we were using LinqPad to read them. I'm setting up ELK so we can analyse them in a more effective way.

Following all the instructions from https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-input-azurewadtable, I'm still unable to get any data.

I changed the collection start time to a date in the past, and still get nothing more than what is below.

```
c:\ELK\logstash-5.6.3\bin>logstash -f logstash-wad.conf
c:/ELK/logstash-5.6.3/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:108 warning: already initialized constant DEFAULT_MAX_POOL_SIZE
c:/ELK/logstash-5.6.3/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:110 warning: already initialized constant DEFAULT_REQUEST_TIMEOUT
c:/ELK/logstash-5.6.3/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:111 warning: already initialized constant DEFAULT_SOCKET_TIMEOUT
c:/ELK/logstash-5.6.3/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:112 warning: already initialized constant DEFAULT_CONNECT_TIMEOUT
c:/ELK/logstash-5.6.3/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:113 warning: already initialized constant DEFAULT_MAX_REDIRECTS
c:/ELK/logstash-5.6.3/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:114 warning: already initialized constant DEFAULT_EXPECT_CONTINUE
c:/ELK/logstash-5.6.3/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:115 warning: already initialized constant DEFAULT_STALE_CHECK
c:/ELK/logstash-5.6.3/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:590 warning: already initialized constant ISO_8859_1
c:/ELK/logstash-5.6.3/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/client.rb:641 warning: already initialized constant KEY_EXTRACTION_REGEXP
Sending Logstash's logs to c:/ELK/logstash-5.6.3/logs which is now configured via log4j2.properties
[2017-10-18T10:23:24,946][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"c:/ELK/logstash-5.6.3/modules/fb_apache/configuration"}
[2017-10-18T10:23:24,950][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"c:/ELK/logstash-5.6.3/modules/netflow/configuration"}
[2017-10-18T10:23:26,107][INFO ][logstash.inputs.azurewadtable] Using version 0.9.x input plugin 'azurewadtable'. This plugin should work but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.
[2017-10-18T10:23:26,149][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-10-18T10:24:41,778][INFO ][logstash.pipeline        ] Pipeline main started
[2017-10-18T10:24:41,982][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
```

Am I totally missing something here?

Joffinn commented 6 years ago

FYI my config file looks like this:

```
input {
  azurewadtable {
    account_name => "STORAGE ACCOUNT NAME"
    access_key => "STORAGE ACCESS KEY"
    table_name => "TABLE NAME"
  }
}
output {
  elasticsearch {
    hosts => ["10.0.2.100:9200"]
    index => "wad"
  }
}
```

My guess is that I did something wrong with the output, but I don't see what.

xiaomi7732 commented 6 years ago

@Joffinn Firstly, please try the simplest debug output, like this, to see whether you get any events at all:

```
output {
  stdout {
    codec => rubydebug
  }
}
```

If yes, then try the elasticsearch output plugin with the default index:

```
output {
  elasticsearch {
    hosts => ["10.0.2.100:9200"]
  }
}
```

If you do want to customize the index, giving it a hard-coded value will not take you anywhere. Please refer to the documentation here to set up the index parameter: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-index.
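For example, a sketch of a date-based index using Logstash's event sprintf format (the host is just the one from the config above; the `wad-` prefix is an arbitrary choice):

```
output {
  elasticsearch {
    hosts => ["10.0.2.100:9200"]
    index => "wad-%{+YYYY.MM.dd}"
  }
}
```

The `%{+YYYY.MM.dd}` part is expanded per event from its `@timestamp`, so documents land in daily indices instead of one ever-growing index.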

xiaomi7732 commented 6 years ago

@Joffinn It has been a while; did you get a chance to try the bare-bones configuration out?

Joffinn commented 6 years ago

@xiaomi7732 a colleague of mine found the trick to make it run. We had to comment out the loop at line 81 of azurewadtable.rb, like this:

```ruby
# for i in 0..99
#   query_filter << " or (PartitionKey gt '#{i.to_s.rjust(19, '0')}_#{partitionkey_from_datetime(@last_timestamp)}' and PartitionKey lt '#{i.to_s.rjust(19, '0')}_#{partitionkey_from_datetime(@until_timestamp)}')"
# end # for block
```

And suddenly it worked like a charm.

She found this in some other thread, but we still don't understand why it was needed to get the plugin running properly.

xiaomi7732 commented 6 years ago

@Joffinn Really appreciate your input! @clguimanMSFT, could you please follow up a bit on this issue?

clguiman commented 6 years ago

@Joffinn The commented-out lines just extend the query to include more types of data; sometimes the partition key can also include `_timestamp`. Is it possible to share some of the data in the tables? The PartitionKey column is enough. You can use Storage Explorer (https://azure.microsoft.com/en-us/features/storage-explorer/) to get this data.

Joffinn commented 6 years ago

@clguimanMSFT At the moment I only have access to these tables through LinqPad, but the PartitionKey column looks like this. Hope it helps:

```
0636441762000000000
0636441768000000000
0636441774000000000
```

(only these three distinct values appear; each one is repeated across many consecutive rows)
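For what it's worth, values of that shape decode cleanly as .NET ticks (100 ns intervals since 0001-01-01 UTC), which is how WAD builds its time-based PartitionKey. A minimal Ruby sketch of the decoding (the helper name `partitionkey_to_time` is made up for illustration; it is not part of the plugin):

```ruby
# Ticks elapsed between 0001-01-01 and the Unix epoch: a well-known .NET constant.
TICKS_AT_UNIX_EPOCH = 621_355_968_000_000_000
TICKS_PER_SECOND    = 10_000_000

# Hypothetical helper: convert a WAD PartitionKey string to a UTC Time.
def partitionkey_to_time(partition_key)
  ticks = partition_key.to_i # the leading zero is ignored by String#to_i
  Time.at((ticks - TICKS_AT_UNIX_EPOCH) / TICKS_PER_SECOND).utc
end

puts partitionkey_to_time("0636441762000000000") # => 2017-10-21 09:50:00 UTC
```

Note these keys have no `NNNN_` instance prefix, which would explain why only the un-prefixed part of the plugin's query matches anything.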

clguiman commented 6 years ago

@Joffinn Can you please set the log level to 'debug' (https://www.elastic.co/guide/en/logstash/current/logging.html) and capture the traces emitted by the azurewadtable plugin? It should at least include the full query, any exceptions, and anything before "[filter_duplicates] ... new item".
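For a one-off run on Logstash 5.x, the log level can also be raised from the command line with the `--log.level` flag (paths here follow the c:\ELK layout used earlier in this thread):

```
c:\ELK\logstash-5.6.3\bin>logstash --log.level=debug -f logstash-wad.conf
```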

clguiman commented 6 years ago

@Joffinn you accidentally leaked your storage key; you should reset it :) The logs provided only show the initialization part of the plugin; there are no logs from the actual query running. You should see something like "Query filter: ".