:warning: This project is no longer actively maintained!
Logquacious (lq) is a fast and simple log viewer built by Cash App.
It currently only supports exploration of logs stored in Elasticsearch; however, the storage/indexing backend is pluggable. If you are interested in contributing more backends, open a pull request!
Putting application and system logs in an Elasticsearch index is a common way to store logs from multiple sources in a single place that can be searched. However, while there are many web-based user interfaces for Elasticsearch, most of them either focus on read/write access, treating Elasticsearch as a general purpose database, or are Elasticsearch query builders. We didn't find any modern, well-designed, minimalist web user interfaces designed with the explicit purpose of read-only log exploration.
The local demo runs a basic web server which serves Logquacious. It also runs an instance of Elasticsearch with a script to generate demo log entries.
You'll need `docker` and `docker-compose` installed, then run:

```sh
cd demo
docker-compose up
```
Wait a while, then visit http://localhost:8080/ in your browser.
You should be presented with the Logquacious UI and a few logs that are continuously generated in the background.
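For reference, here is a minimal sketch of what the demo wires together. The service names, image tag, and layout are assumptions for illustration; the repo's `demo/docker-compose.yml` is authoritative.

```yaml
# Illustrative only: an Elasticsearch node plus Logquacious proxying to it.
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    environment:
      - discovery.type=single-node   # single-node dev cluster
  logquacious:
    image: squareup/logquacious
    # --es-proxy serves Elasticsearch through nginx so the browser avoids CORS
    command: ["--es-proxy", "--es-url=http://elasticsearch:9200"]
    ports:
      - "8080:8080"
    depends_on:
      - elasticsearch
  # (the real demo also runs a script that continuously generates log entries)
```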
To run the prebuilt Docker image, you will need Docker installed.
You can configure the image in multiple ways:
You can configure the instance via command line arguments or environment variables (e.g. `ES_URL`):
```
# docker run logquacious --help
Usage: lq-startup

Flags:
  --help                     Show context-sensitive help.
  --es-proxy                 Use a reverse proxy for Elasticsearch to avoid
                             needing CORS. (ES_PROXY)
  --es-url=STRING            Elasticsearch host to send queries to, e.g.:
                             http://my-es-server:9200/ (ES_URL)
  --es-index="*"             Elasticsearch index to search in. (ES_INDEX)
  --timestamp-field="@timestamp"
                             The field containing the main timestamp entry.
                             (TIMESTAMP_FIELD)
  --level-field="level"      The field containing the log level. (LEVEL_FIELD)
  --service-field="service"  The field containing the name of the service.
                             (SERVICE_FIELD)
  --message-field="message"  The field containing the main message of the log
                             entry. (MESSAGE_FIELD)
  --ignored-fields=_id,_index,...
                             Do not display these fields in the collapsed log
                             line. (IGNORED_FIELDS)
```
For example, run the following for this configuration: Elasticsearch server `192.168.0.1`, index prefix `logs-`, message field `text`, exposed on port `9999`:

```sh
docker run -p 0.0.0.0:9999:8080 squareup/logquacious \
  --es-url="http://192.168.0.1:9200" \
  --es-index="logs-*" \
  --message-field="text"
```
Typical output:

```
2020/01/13 21:39:32 Variables for this docker image looks like this:
{ESProxy:true ESURL:http://192.168.0.1:9200 ESIndex:logs-* TimestampField:@timestamp LevelField:level ServiceField:service MessageField:text IgnoredFields:[_id _index] IgnoredFieldsJoined:}
2020/01/13 21:39:32 Successfully generated /etc/nginx/conf.d/lq.conf
2020/01/13 21:39:32 Successfully generated /lq/config.json
2020/01/13 21:39:32 Running nginx...
```

http://localhost:9999/ should work in this example.
If you have your own `config.json`, you can simply mount it at `/lq/config.json`:

```sh
docker run -p 0.0.0.0:9999:8080 -v `pwd`/custom-config.json:/lq/config.json squareup/logquacious
```
You can also mount your own nginx configuration at `/etc/nginx/conf.d/lq.conf`. By default it is generated for you based on the command line arguments.
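For example, assuming a local file named `lq.conf` (a hypothetical name) containing your nginx configuration:

```sh
docker run -p 0.0.0.0:9999:8080 -v `pwd`/lq.conf:/etc/nginx/conf.d/lq.conf squareup/logquacious
```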
To build from source:

```sh
git clone https://github.com/cashapp/logquacious
cd logquacious
npm install
npm run build
```
`npm run build` will generate a `dist` directory containing all the files needed for a web server, including an `index.html` file. Configure Logquacious in `config.json`.
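As a starting point, here is a minimal sketch of a `config.json`. The `logs-*` index pattern and the `/es` prefix are assumptions; `/es` matches the Caddy proxy set up below, and the full schema is documented in the configuration reference further down.

```json
{
  "dataSources": [
    {
      "id": "es",
      "type": "elasticsearch",
      "index": "logs-*",
      "urlPrefix": "/es",
      "fields": "main"
    }
  ],
  "fields": {
    "main": {
      "timestamp": "@timestamp",
      "collapsedFormatting": [
        { "field": "@timestamp", "transforms": ["timestamp"] },
        { "field": "message", "transforms": [{ "addClass": "strong" }] }
      ]
    }
  },
  "filters": []
}
```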
If you don't already have a web server, you can set one up with Caddy:

```sh
curl https://getcaddy.com | bash -s personal
```

Create a `Caddyfile` to listen on port 8080 over plain HTTP and proxy to your Elasticsearch server:

```
:8080
proxy /es my-elastic-search-hostname:9200 {
  without /es
}
```

Run `caddy` in the same directory as the `Caddyfile`, then visit http://localhost:8080/. The Elasticsearch endpoint should be working at http://localhost:8080/es/.
The development workflow is very similar to the "From Source" setup above. You can run a self-reloading development server instead of `npm run build`.
You can either set up CORS on Elasticsearch or reverse proxy both the hot server and Elasticsearch. To do this, create a `Caddyfile` in the root of the project:

```
:8080

# Redirect all /es requests to the Elasticsearch server
proxy /es my-elastic-search-hostname:9200 {
  without /es
}

# Redirect all other requests to parcel's development server.
proxy / localhost:1234
```
To run the parcel development server:

```sh
npm run hot
```

Run `caddy`. You should be able to hit http://localhost:8080/, and when you make any code changes the page should refresh.
Tests are executed with `npm test`.
The top-level structure of the JSON configuration is as follows:

```json
{
  "dataSources": [],
  "fields": {
    "name-of-field-configuration": []
  },
  "filters": []
}
```
The `dataSources` section contains the URL, index, and other settings for querying Elasticsearch. An example:
"dataSources": [
{
"id": "elasticsearch-server",
"type": "elasticsearch",
"index": "{{.ESIndex}}",
"urlPrefix": "{{if .ESProxy}}/es{{else}}{{.ESURL}}{{end}}",
"fields": "main",
"terms": "-service:lq-nginx"
}
]
- `id` is a reference that can be used to create a data source filter (see below). If you only have one data source, you don't need to create a data source filter.
- `type` must be `elasticsearch` until more data sources are implemented.
- `index` is the Elasticsearch index to search in. You can use an asterisk as a wildcard. This corresponds to the index in a query request URL, e.g. http://es:9200/index/_search
- `urlPrefix` is the URL to connect to your Elasticsearch server, without a trailing slash. This will resolve to `urlPrefix/index/_search`.
- `fields` is a reference to a key of `fields` in the top level of the JSON configuration.
- `terms` is a string containing Elasticsearch terms that will always be added to the user's terms. Useful for hiding the logs of queries made to Logquacious itself.
The `fields` sections configure how log entries are shown in the UI. You can transform values, add classes, ignore fields, and so on. Here is an example:
"fields": {
"main": {
"timestamp": "@timestamp",
"collapsedFormatting": [
{
"field": "@timestamp",
"transforms": [
"timestamp"
]
},
{
"field": "message",
"transforms": [
{
"addClass": "strong"
}
]
}
],
"collapsedIgnore": ["_id", "_index"]
}
}
This configuration will do the following:

- Define a field configuration named `main`, which is the `fields` reference used in `dataSources`.
- Show the `@timestamp` field at the start of each line and format it.
- Show the `message` field afterwards and make it stand out.
- Hide `_id` and `_index` in the collapsed log line.

If you want to see an example of many transforms, check out the example config.
There is a drop-down menu that is enabled when you use filters; it sits between the search button and the time drop-down. You can customise it to offer values to filter on, e.g.:
"filters": [
{
"id": "region",
"urlKey": "r",
"title": "Region",
"default": "ap-southeast-2",
"type": "singleValue",
"items": [
{
"title": "All Regions",
"id": null
},
{
"title": "Sydney",
"id": "ap-southeast-2"
},
{
"title": "London",
"id": "eu-west-2"
}
]
}
]
This `singleValue` filter allows you to filter log entries based on `region` equalling `ap-southeast-2`, for example. This is identical to searching for `region:ap-southeast-2` in the search field.
The `urlKey` is what is used in the URL for this filter. For example, the URL might look like: http://localhost:8080/?q=my+search&r=ap-southeast-2
`title` is shown as the name of the field/value in the search drop-down menu.
The `null` value signifies that the filter is not selected, in which case it does not filter on that key.
Another type of filter is a `dataSource` filter, for when you have multiple Elasticsearch instances. The `id` of each item must point to the `id` of a data source.
You can see an example of this in the example config under the `env` filter.
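As a rough sketch, such a filter could look like the following; the data source ids `production-es` and `staging-es` are hypothetical and must match `id` values in your `dataSources`:

```json
"filters": [
  {
    "id": "env",
    "urlKey": "e",
    "title": "Environment",
    "default": "production-es",
    "type": "dataSource",
    "items": [
      { "title": "Production", "id": "production-es" },
      { "title": "Staging", "id": "staging-es" }
    ]
  }
]
```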
If you want to be able to communicate to Elasticsearch on a different host and port to Logquacious, you will need to configure Elasticsearch to respond with the correct CORS headers.
For example, suppose you are running https://lq.mycompany.com/, which serves the static content. You will need to set these configuration options in Elasticsearch:

```yaml
http.cors.enabled: true
http.cors.allow-origin: "https://lq.mycompany.com"
```

Note that the allowed origin must match the browser's `Origin` header exactly, which does not include a trailing slash.
See the Elasticsearch documentation on the http configuration options for more information.
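To sanity-check the CORS setup, you can send Elasticsearch a request with an `Origin` header and inspect the response headers (the hostnames here are examples):

```sh
# If CORS is configured correctly, the response headers should include:
#   Access-Control-Allow-Origin: https://lq.mycompany.com
curl -s -D - -o /dev/null \
  -H "Origin: https://lq.mycompany.com" \
  "http://my-es-server:9200/"
```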
Copyright 2019 Square, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.