This PR updates `bin/query-cloudwatch` to support `--sqlite` as an option. When used, it will write CloudWatch log records to a SQLite database (`events.db` by default).
A few notes:

- Your query must return `@timestamp` and `@message` fields. If it does not, you will get an error (see the example query after this list).
- It's optimized for working with `events.log`, where `@message` is always JSON, but it will work with other logs as well.
- You can't use `--count-distinct` in combination with `--format sqlite`.
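For reference, here's a minimal CloudWatch Logs Insights query that satisfies the first note (the `limit` value is arbitrary):

```
fields @timestamp, @message
| sort @timestamp desc
| limit 100
```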
## Schema
The command will write events to a table with the following schema:
```sql
CREATE TABLE IF NOT EXISTS events (
  id TEXT PRIMARY KEY NOT NULL,
  timestamp TEXT NOT NULL,
  name TEXT NULL,
  user_id TEXT NULL,
  success INTEGER NULL,
  message TEXT NOT NULL,
  log_stream TEXT NULL,
  log TEXT NULL
)
```
- `timestamp` contains an ISO-8601 timestamp (in UTC). You can use SQLite's built-in date and time functions to work with it (see the sketch after this list).
- For `events.log`, `message` contains the original JSON, and you can use SQLite's JSON functions to work with that (also sketched below).
- `name`, `user_id`, and `success` are all automatically populated when using `events.log`.
- `log_stream` and `log` are optional, and will be set to `@logStream` and `@log` if your query includes them.
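A couple of sketches of what those queries might look like; SQLite's built-in `date()` and `json_extract()` functions are standard, but the `$.name` JSON path is an assumption about the event structure, not something this PR specifies:

```sql
-- Bucket events by day using SQLite's built-in date() function
SELECT date(timestamp) AS day, COUNT(*) AS event_count
FROM events
GROUP BY day
ORDER BY day;

-- Pull a field out of the original event JSON with json_extract();
-- the '$.name' path is an assumed event shape, adjust to your data
SELECT json_extract(message, '$.name') AS event_name, COUNT(*) AS n
FROM events
GROUP BY event_name
ORDER BY n DESC;
```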
## Example
Grab 10 records and put them in `events.db`:
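A minimal sketch of the invocation; only `--sqlite` is confirmed by this PR, and the `--query` flag and its syntax are assumptions for illustration:

```sh
# Hypothetical flags other than --sqlite; adjust to the script's actual interface
bin/query-cloudwatch \
  --query 'fields @timestamp, @message | limit 10' \
  --sqlite
```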
## Wait, how does sqlite work?
You can open an interactive session like this:
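Assuming the standard `sqlite3` command-line shell is installed:

```sh
sqlite3 events.db
```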
From there:

- `.headers on` turns on table headers
- `.tables` lists the available tables
- `.mode line` makes the output easier to read in a terminal
- Don't forget the `;` at the end of your query
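For instance, a quick session might look like this (the row count shown is illustrative):

```
sqlite> .headers on
sqlite> .mode line
sqlite> .tables
events
sqlite> SELECT COUNT(*) AS total FROM events;
total = 10
```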