The log parser is designed to help include log results in tests, reports and general applicative processes. It allows you to parse and analyze log files in order to extract relevant data. It can be used as is or as an SDK, where you can define your own parsing.
The basic workflow of this library is that you create a definition for your parsing. This definition allows you to parse a set of log files and extract all entries that match the pattern.
For now, we are using this library with Maven; examples for other build systems will be published in a later iteration.
The following dependency needs to be added to your pom file:
<dependency>
    <groupId>com.adobe.campaign.tests</groupId>
    <artifactId>log-parser</artifactId>
    <version>1.11.2</version>
</dependency>
We have two ways of running the log parser:
In order to parse logs you need to define a ParseDefinition. A ParseDefinition contains a set of ordered ParseDefinition Entries. While parsing a line of logs, the LogParser will see if all entries can be found in the line of logs. If that is the case, the line is stored according to the definitions.
Each Parse Definition consists of:
Each entry for a Parse Definition allows us to define:
When you have defined your parsing you use the LogDataFactory by passing it:
By using the StringParseFactory, we get a LogData object which allows us to manage the log data we have found.
As mentioned in the chapter Defining an Entry, each Parse Definition Entry contains a start and end pattern. We extract and store the values between these two points, and continue with the rest of the line until there is no more data to parse.
A line is only considered if all the Parse Definition Entries can be matched in the order they have been defined.
Note: Once we have extracted the data corresponding to an entry, the remaining string will still include the end pattern of that entry. This is because the end pattern may sometimes be part of the data for a different entry.
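To illustrate this rule, here is a minimal, stdlib-only sketch of the extract-between logic described above (this is not the library's actual implementation): the remainder handed to the next entry starts at the end pattern, so the same characters can serve as the next entry's start pattern.

```java
public class ExtractionSketch {

    /**
     * Extracts the value between start and end.
     * Returns { value, remainder }; the remainder deliberately KEEPS the end
     * pattern, so it can double as the start pattern of the next entry.
     */
    static String[] extract(String line, String start, String end) {
        int s = line.indexOf(start);
        if (s < 0) return null;                    // start pattern not found
        int from = s + start.length();
        int e = line.indexOf(end, from);
        if (e < 0) return null;                    // end pattern not found
        return new String[] { line.substring(from, e), line.substring(e) };
    }

    public static void main(String[] args) {
        String line = "host:443 - - \"GET /rest/head/workflow/WKF193 HTTP/1.1\" 200";
        String[] verb = extract(line, "\"", " /"); // value "GET", remainder " /rest/..."
        String[] path = extract(verb[1], " /", " ");
        System.out.println(verb[0] + " " + path[0]); // GET rest/head/workflow/WKF193
    }
}
```

Note how the second extraction's start pattern `" /"` is found at the very beginning of the remainder, precisely because the first extraction did not consume it.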
We have discovered that it is often useful to anonymize data. This allows you to group log data that contains variables. Anonymization uses two notations: `{}` and `[]`.

For example, if you store an anonymizer with the value:

`Storing key '{}' in the system`

the log-parser will merge all lines that contain the same text, but with different values for the key. For example, the lines `Storing key 'G' in the system` and `Storing key 'H' in the system` will both be stored as `Storing key '{}' in the system`.
Sometimes we just want to anonymize part of a line. This is useful if you want to do post-treatment. In our previous example, `Storing key 'G' in the system` would be merged, however `NEO-1234 : Storing key 'G' in the system` would not be. In this case we can do a partial anonymization using the `[]` notation. For example, if we enrich our original template:

`[]Storing key '{}' in the system`

In this case the lines:

- `NEO-1234 : Storing key 'G' in the system` will be stored as `NEO-1234 : Storing key '{}' in the system`
- `NEO-1234 : Storing key 'H' in the system` will be stored as `NEO-1234 : Storing key '{}' in the system`
- `EXA-1234 : Storing key 'Z' in the system` will be stored as `EXA-1234 : Storing key '{}' in the system`
- `EXA-1234 : Storing key 'X' in the system` will be stored as `EXA-1234 : Storing key '{}' in the system`
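The merging behavior above can be sketched with plain `java.util.regex` (this is a stdlib-only illustration of the notation, not the library's implementation): `{}` placeholders are masked in the stored line, while `[]` placeholders are matched but preserved.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AnonymizerSketch {

    /** Normalizes a line against a template: '{}' parts are masked, '[]' parts are kept. */
    static String normalize(String template, String line) {
        // Split the template into literal fragments and {} / [] placeholders
        List<String> literals = new ArrayList<>();
        List<String> kinds = new ArrayList<>();
        Matcher m = Pattern.compile("\\{\\}|\\[\\]").matcher(template);
        int last = 0;
        while (m.find()) {
            literals.add(template.substring(last, m.start()));
            kinds.add(m.group());
            last = m.end();
        }
        literals.add(template.substring(last));

        // Build a regex with one capture group per placeholder
        StringBuilder rx = new StringBuilder();
        for (int i = 0; i < kinds.size(); i++) {
            rx.append(Pattern.quote(literals.get(i))).append("(.*)");
        }
        rx.append(Pattern.quote(literals.get(literals.size() - 1)));
        Matcher lm = Pattern.compile(rx.toString()).matcher(line);
        if (!lm.matches()) return line; // the template does not apply to this line

        // Rebuild the line: mask '{}' values, keep '[]' values
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < kinds.size(); i++) {
            out.append(literals.get(i));
            out.append(kinds.get(i).equals("{}") ? "{}" : lm.group(i + 1));
        }
        out.append(literals.get(literals.size() - 1));
        return out.toString();
    }

    public static void main(String[] args) {
        String template = "[]Storing key '{}' in the system";
        System.out.println(normalize(template, "NEO-1234 : Storing key 'G' in the system"));
        // NEO-1234 : Storing key '{}' in the system
    }
}
```

Lines normalized to the same string can then be merged, with the `{}` values collapsed and the `[]` prefixes kept distinct, exactly as in the table above.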
Here is an example of how we can parse a string. The same mechanism is used to perform the parsing of one or many files.
@Test
public void parseAStringDemo() throws StringParseException {
    String logString = "afthostXX.qa.campaign.adobe.com:443 - - [02/Apr/2022:08:08:28 +0200] \"GET /rest/head/workflow/WKF193 HTTP/1.1\" 200 ";

    //Create a parse definition
    ParseDefinitionEntry verbDefinition = new ParseDefinitionEntry();
    verbDefinition.setTitle("verb");
    verbDefinition.setStart("\"");
    verbDefinition.setEnd(" /");

    ParseDefinitionEntry apiDefinition = new ParseDefinitionEntry();
    apiDefinition.setTitle("path");
    apiDefinition.setStart(" /");
    apiDefinition.setEnd(" ");

    List<ParseDefinitionEntry> definitionList = Arrays.asList(verbDefinition, apiDefinition);

    //Perform Parsing
    Map<String, String> parseResult = StringParseFactory.parseString(logString, definitionList);

    //Check Results
    assertThat("We should have an entry for verb", parseResult.containsKey("verb"));
    assertThat("We should have the correct value for the verb", parseResult.get("verb"), is(equalTo("GET")));
    assertThat("We should have an entry for the API", parseResult.containsKey("path"));
    assertThat("We should have the correct value for the path", parseResult.get("path"),
            is(equalTo("rest/head/workflow/WKF193")));
}
In the code above we want to parse the log line below, find the REST call "GET /rest/head/workflow/WKF193", and extract the verb "GET" and the path "/rest/head/workflow/WKF193".
afthostXX.qa.campaign.adobe.com:443 - - [02/Apr/2022:08:08:28 +0200] "GET /rest/head/workflow/WKF193 HTTP/1.1" 200
The code starts with the creation of a parse definition containing two parse definition entries that tell us between which markers each piece of data should be extracted. The parse definition is then handed to the StringParseFactory so that the data can be extracted. At the end we can see that each piece of data is stored in a map with the parse definition entry title as its key.
You can import or store a Parse Definition to or from a JSON file.
You can define a Parse Definition in a JSON file.
This can then be imported and used for parsing using the method ParseDefinitionFactory.importParseDefinition. Here is a small example of what the JSON looks like:
{
  "title": "Anonymization",
  "storeFileName": false,
  "storeFilePath": false,
  "storePathFrom": "",
  "keyPadding": "#",
  "keyOrder": [],
  "definitionEntries": [
    {
      "title": "path",
      "start": "HTTP/1.1|",
      "end": "|Content-Length",
      "caseSensitive": false,
      "trimQuotes": false,
      "toPreserve": true,
      "anonymizers": [
        "X-Security-Token:{}|SOAPAction:[]"
      ]
    }
  ]
}
By default, the Log-Parser will generate a standardized key-value extraction of the logs you parse. All values are then stored as Strings. For more advanced transformations we suggest you write your own Log SDK. We will describe each in detail in this chapter.
By default, each entry of your log parsing will be stored as a Generic entry. This means that all values will be stored as Strings. Each entry will have a:
Using the log parser as an SDK allows you to define your own transformations and also to override many of the behaviors. By default, we can look at the SDK mode as a second parsing: we first parse the logs using the generic ParseDefinitions, and then a second treatment is performed with the SDK you write.
Typical use cases are:
In order to use this feature you need to define a class that extends the class StdLogEntry.
You will often want to transform the parsed information into a more manageable object by defining your own fields in the SDK class.
In the project we have two examples of SDKs (under `src/test/java`):

- `com.adobe.campaign.tests.logparser.data.SDKCaseSTD`, where we perform additional parsing of the log data.
- `com.adobe.campaign.tests.logparser.data.SDKCase2`, where we transform the time into a date object.

You will need to declare a default constructor and a copy constructor. The copy constructor will allow you to copy the values from one object to another.
You will need to declare how the parsed variables are transformed into your SDK object. This is done in the method setValuesFromMap().
In there you can define a fine-grained extraction of the variables. This could be extracting hidden data from the extracted strings, or simple data transformations such as integers or dates.
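As a stdlib-only sketch of what such a transformation could look like (the field names `date` and `code` are hypothetical, and this class does not extend the library's StdLogEntry), the idea is simply to turn the parsed String map into typed fields:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;
import java.util.Map;

public class TypedEntrySketch {
    LocalDate logDate;
    int statusCode;

    // Plays the role of setValuesFromMap(): turn the parsed strings into typed fields
    void setValuesFromMap(Map<String, String> parsed) {
        logDate = LocalDate.parse(parsed.get("date"),
                DateTimeFormatter.ofPattern("dd/MMM/yyyy", Locale.ENGLISH));
        statusCode = Integer.parseInt(parsed.get("code"));
    }

    public static void main(String[] args) {
        TypedEntrySketch entry = new TypedEntrySketch();
        entry.setValuesFromMap(Map.of("date", "02/Apr/2022", "code", "200"));
        System.out.println(entry.logDate + " " + entry.statusCode); // 2022-04-02 200
    }
}
```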
You will need to define what makes a line unique. Although this is already done in the Definition Rules, you may want to provide more precision. This is done in the method makeKey().
Depending on the fields you have defined, you will want to define how the results are represented when they are stored in your system.
You will need to give names to the headers, and provide a map that extracts the values.
One of the added values of writing your own log data class is the possibility of using non-String objects and performing additional operations on the data. This has the drawback that we can get odd behaviors when exporting the log data. For this reason we, by default, transform all data in an entry to a map of Strings.
In some cases the default String transformation may not be to your liking. In this case you will have to override the method Map<String, String> fetchValueMapPrintable(). This method should perform your own transformation on the results of the fetchValueMap() method.
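A stdlib-only sketch of this override pattern (the `responseTime` field is hypothetical, and this class does not extend the library's StdLogEntry): a raw value map carrying a non-String object, and a printable map that formats it for export.

```java
import java.time.Duration;
import java.util.LinkedHashMap;
import java.util.Map;

public class PrintableEntrySketch {
    // A non-String field, as made possible by writing your own SDK entry
    Duration responseTime = Duration.ofMillis(1500);

    // Analogue of fetchValueMap(): raw values, possibly non-String
    Map<String, Object> fetchValueMap() {
        Map<String, Object> map = new LinkedHashMap<>();
        map.put("responseTime", responseTime);
        return map;
    }

    // Analogue of fetchValueMapPrintable(): same keys, but export-friendly Strings
    Map<String, String> fetchValueMapPrintable() {
        Map<String, String> printable = new LinkedHashMap<>();
        fetchValueMap().forEach((k, v) -> printable.put(k,
                v instanceof Duration ? ((Duration) v).toMillis() + " ms" : String.valueOf(v)));
        return printable;
    }

    public static void main(String[] args) {
        System.out.println(new PrintableEntrySketch().fetchValueMapPrintable()); // {responseTime=1500 ms}
    }
}
```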
Below is a diagram representing the class structure:
We have a series of methods for searching and organizing the log data. These generally use Hamcrest Matchers to allow you to define different queries.
We have introduced the filter and search mechanisms. These allow you to search the LogData for values for a given ParseDefinitionEntry. For this we have introduced the following methods:
We currently have the following signatures:
public boolean isEntryPresent(String in_parseDefinitionName, String in_searchValue)
public boolean isEntryPresent(Map<String, Matcher> in_searchKeyValues)
public LogData<T> searchEntries(String in_parseDefinitionName, String in_searchValue)
public LogData<T> searchEntries(Map<String, Matcher> in_searchKeyValues)
public LogData<T> filterBy(Map<String, Matcher> in_filterKeyValues)
When we define a search term, we do so as a map of Parse Definition Entry names and Matchers. The Matcher we use is a Hamcrest matcher, which provides great flexibility in defining the search terms.
Map<String, Matcher> l_filterProperties = new HashMap<>();
l_filterProperties.put("Definition 1", Matchers.equalTo("14"));
l_filterProperties.put("Definition 2", Matchers.startsWith("13"));
LogData<GenericEntry> l_foundEntries = l_logData.searchEntries(l_filterProperties);
In versions prior to 1.11.0 we used a map of keys and Objects for search terms. In these queries the check was implicitly an equality check. Such search terms can therefore be replaced with Matchers.equalTo or Matchers.is.
Example of a search term in version 1.10.0:
Map<String, Object> l_filterProperties = new HashMap<>();
l_filterProperties.put("Definition 1", "14");
In version 1.11.0 the same search term would look like this:
Map<String, Matcher> l_filterProperties = new HashMap<>();
l_filterProperties.put("Definition 1", Matchers.equalTo("14"));
We have the capability to enrich log data with additional information. This is done by using the method LogData#enrichData(Map<String, Matcher>, String, String). This method accepts:
If you want to add multiple values for the enrichment, you can run this method several times, or use the more suitable method LogData#enrichData(Map<String, Matcher>, Map<String, String> keyValueToEnrich). This method accepts:
We have also introduced a method called LogData#enrichEmpty(String, String), which sets a value for the entries that do not yet have a value set for them.
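The two enrichment steps described above can be sketched with the standard library alone (a Predicate stands in for the Hamcrest Matcher map, and entries are plain String maps; this is an illustration, not the library's implementation):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class EnrichSketch {

    // Adds a new column: entries matching the condition get the value, others stay empty
    static void enrichData(List<Map<String, String>> entries,
                           Predicate<Map<String, String>> condition, String key, String value) {
        for (Map<String, String> entry : entries) {
            entry.put(key, condition.test(entry) ? value : "");
        }
    }

    // Fills the new column only where no value has been set yet
    static void enrichEmpty(List<Map<String, String>> entries, String key, String defaultValue) {
        for (Map<String, String> entry : entries) {
            if (entry.getOrDefault(key, "").isEmpty()) {
                entry.put(key, defaultValue);
            }
        }
    }

    public static void main(String[] args) {
        List<Map<String, String>> entries = new ArrayList<>();
        entries.add(new LinkedHashMap<>(Map.of("code", "200")));
        entries.add(new LinkedHashMap<>(Map.of("code", "500")));
        enrichData(entries, e -> e.get("code").equals("500"), "severity", "ERROR");
        enrichEmpty(entries, "severity", "OK");
        System.out.println(entries.get(0).get("severity") + " " + entries.get(1).get("severity")); // OK ERROR
    }
}
```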
We have introduced the groupBy mechanism. This functionality allows you to organize your results in more detail. Given a log data object and an array of ParseDefinitionEntry names, we generate a new LogData object containing groups made from the passed ParseDefinitionEntries and the number of entries for each group.
Let's take the following case:
Definition 1 | Definition 2 | Definition 3 | Definition 4 |
---|---|---|---|
12 | 14 | 13 | AA |
112 | 114 | 113 | AAA |
120 | 14 | 13 | AA |
If we perform groupBy with the parse definition entry Definition 2, we get a new LogData object with two entries:
Definition 2 | Frequence |
---|---|
14 | 2 |
114 | 1 |
We can also pass a list of group by items, or even perform a chaining of the group by predicates.
We can create a sub-group of the LogData by calling the groupBy function:
LogData<GenericEntry> l_myGroupedData = logData.groupBy(Arrays.asList("Definition 1", "Definition 4"));
//or
LogData<MyImplementationOfStdLogEntry> l_myGroupedData = logData.groupBy(Arrays.asList("Definition 1", "Definition 4"), MyImplementationOfStdLogEntry.class);
In this case we get:
Definition 1 | Definition 4 | Frequence |
---|---|---|
12 | AA | 1 |
112 | AAA | 1 |
120 | AA | 1 |
The GroupBy can also be chained. Example:
LogData<GenericEntry> l_myGroupedData = logData.groupBy(Arrays.asList("Definition 1", "Definition 4")).groupBy("Definition 4");
In this case we get:
Definition 4 | Frequence |
---|---|
AA | 2 |
AAA | 1 |
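The grouping and counting shown in the tables above can be sketched with the Java streams API (a stdlib-only illustration over plain String maps, not the library's implementation):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupBySketch {

    // Groups entries by the values of the given keys and counts the frequency of each group
    static Map<List<String>, Long> groupBy(List<Map<String, String>> entries, List<String> keys) {
        return entries.stream().collect(Collectors.groupingBy(
                e -> keys.stream().map(e::get).collect(Collectors.toList()),
                LinkedHashMap::new, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<Map<String, String>> data = List.of(
                Map.of("Definition 1", "12", "Definition 2", "14", "Definition 4", "AA"),
                Map.of("Definition 1", "112", "Definition 2", "114", "Definition 4", "AAA"),
                Map.of("Definition 1", "120", "Definition 2", "14", "Definition 4", "AA"));
        System.out.println(groupBy(data, List.of("Definition 2")));
        // {[14]=2, [114]=1}
    }
}
```

Chaining works the same way: grouping the result of a previous grouping by a subset of its keys sums the frequencies, as in the Definition 4 table above.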
As of version 1.11.0 we have introduced the possibility of comparing two LogData objects. This is a light compare that checks, for a given key, whether it is absent, added, or has changed in frequency. The method compare returns a LogDataComparison object that contains the results of the comparison. A comparison can be of three types:
Apart from this, we return the:
These values are negative if the values have decreased.
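A stdlib-only sketch of this kind of frequency comparison over two key-to-frequency maps (the type labels and the delta format here are hypothetical, not the library's LogDataComparison API):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class CompareSketch {

    // Classifies each key by how its frequency changed between reference and target
    static Map<String, String> compare(Map<String, Integer> reference, Map<String, Integer> target) {
        Map<String, String> result = new LinkedHashMap<>();
        Set<String> keys = new TreeSet<>(reference.keySet());
        keys.addAll(target.keySet());
        for (String key : keys) {
            int before = reference.getOrDefault(key, 0);
            int after = target.getOrDefault(key, 0);
            if (before == 0) result.put(key, "ADDED");
            else if (after == 0) result.put(key, "REMOVED");
            // the delta is negative when the frequency has decreased
            else if (before != after) result.put(key, "MODIFIED (delta=" + (after - before) + ")");
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> reference = Map.of("GET /a", 2, "GET /b", 1);
        Map<String, Integer> target = Map.of("GET /a", 3, "GET /c", 1);
        System.out.println(compare(reference, target));
    }
}
```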
Creating a differentiation report is done with the method LogData.compare(LogData<T> in_logData). This method returns a LogDataComparison object that contains the results of the comparison.
We can generate an HTML report in which the differences are highlighted. This is done with the method LogDataFactory.generateComparisonReport(LogData reference, LogData target, String filename). This method generates an HTML report detailing the differences found.
As of version 1.0.5 we have introduced the notion of assertions. Assertions can either take a LogData object or a set of files as input.
We currently have the following assertions:
AssertLogData.assertLogContains(LogData<T> in_logData, String in_entryTitle, Matcher in_expectedCondition)
AssertLogData.assertLogContains(String description, LogData<T> in_logData, String in_entryTitle, Matcher in_expectedCondition)
AssertLogData.assertLogContains(LogData<T> in_logData, Map<String, Matcher> in_expectedConditions)
AssertLogData.assertLogContains(String description, LogData<T> in_logData, Map<String, Matcher> in_expectedConditions)
You have two types of assertions: a simple one, where you give an entry key and a matcher, and a more complex one, where you give a map of Parse Definition Entry keys and corresponding matchers.
An assertion will only work if:
Otherwise, you will get a failed assertion for these causes.
We have the possibility to export the log data results into files. Currently the following formats are supported:
All reports are stored in the directory log-parser-reports/export/.
If you are using an SDK to control the log parsing, you may want to override the method fetchValueMapPrintable to provide a more suitable export of the data. For more information on this, please refer to the chapter describing this topic.
We have the possibility to export the log data results into a CSV file. This is done by calling the method LogData#exportLogDataToCSV. You can define the data and the order in which it is exported, as well as the file name.
We have the possibility to export the log data results into an HTML file. This is done by calling the method LogData#exportLogDataToHTML. You can define the data and the order in which it is exported, the file name, and the title of the report.
We have the possibility to export the log data results into a JSON file. This is done by calling the method LogData#exportLogDataToJSON. You can define the data and the order in which it is exported, the file name, and the title of the report.
As of version 1.11.0 we have introduced the possibility of running the log-parser from the command line. This is done by using the executable jar file or by executing the main method via Maven.
The results will currently be stored as a CSV or HTML file.
The command line requires you to provide at least the following information:

- `--startDir`: The root path from which the logs should be searched.
- `--parseDefinition`: The path to the parse definition file.

The typical command line would look like this:
mvn exec:java -Dexec.args="--startDir=src/test/resources/nestedDirs/ --parseDefinition=src/test/resources/parseDefinition.json"
or
java -jar log-parser-1.11.0.jar --startDir=/path/to/logs --parseDefinition=/path/to/parseDefinition.json
You can provide additional information such as:

- `--fileFilter`: The wildcard used for selecting the log files. The default value is `*.log`.
- `--reportType`: The format of the report. The allowed values are currently HTML, JSON & CSV. The default value is HTML.
- `--reportFileName`: The name of the report file. By default, this is the Parse Definition name suffixed with '-export'.
- `--reportName`: The report title as shown in an HTML report. By default, the title includes the Parse Definition name.

You can get a printout of the command line options by running the command with the `--help` flag.
All reports are stored in the directory log-parser-reports/export/.