PostgreSQL component for the elastic.io platform, supports AWS Redshift
This is an open-source component for working with the PostgreSQL object-relational database management system on the elastic.io platform. It also works well with AWS Redshift.
With this component you will have the following trigger:
The following actions are included:
PostgreSQL Component Completeness Matrix
No environment variables are required.
There are two options for authentication:
postgresql://user:password@your.postgresql.host:5432/dbname
Note: if you fill out both the Connection String and all the other connection data fields, the platform will use the connection string to connect to the database.
The option Allow self-signed certificates adds the following to the connection options:

ssl: {
  rejectUnauthorized: false
}

This can be useful for instances that use self-signed SSL certificates (like Heroku).
See the documentation for more details.
This action and trigger are effectively the same, but they are used in two different scenarios: the trigger as a first step, and the action in between other steps.
The following configuration options are available:
For example, suppose you have an SQL query that returns 400 rows. If Bundle results in batches is enabled, you'll get a single message with an array of 400 elements in it:
{
  "values" : [
    {"id": 1...},
    {"id": 2...}
    ...
  ]
}
and if no records were found, you'll get a message with an empty array in it. This is sometimes useful, especially when working with request-response kinds of tasks.
If Bundle results in batches is disabled (the default), you will get one message per resulting row, so in the example above you'll get 400 messages. If the query returned no data, then no messages will be sent.
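With bundling disabled, each of those messages carries a single row object instead, e.g. {"id": 1...}.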
The SELECT action and trigger do not support transactions.
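For illustration, a query for a SELECT step could use the ${} templating described later in this README (the table and column names here are made up):

SELECT * FROM customers WHERE city = ${city}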
This action is useful if you want to insert, update, or delete some data. The returned value is ignored; the number of affected rows can be seen in the log file.
The following configuration options are available:
This action does not support transactions.
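As a sketch, an UPDATE in this action can use the same ${} templating (again, the table and column names are made up):

UPDATE customers SET age = ${age} WHERE name = ${name}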
This action is useful for executing a bulk insert query in one transaction. The incoming message needs to contain a body with an array of objects.
In the Table Name field, put the name of the table into which you want to insert multiple values.
In the Columns field, specify the names of the columns into which the corresponding values will be inserted.
Values needs to contain an array of objects; each object contains the values that will be inserted into the corresponding columns.
For example, suppose you need to execute the following query:
INSERT INTO itemstable(id, text) VALUES (1, 'First item'), (2, 'Second item')
You need to specify Table Name = 'itemstable', Columns = 'id, text', and Values needs to be:
[
  {
    "id": 1,
    "text": "First item"
  },
  {
    "id": 2,
    "text": "Second item"
  }
]
All changes will be rolled back if something is wrong with the data.
Expert mode. You can execute an SQL query or an SQL script in this action.
Put your SQL expression into the SQL Query field for further execution. You can put a single SQL query there, or several queries separated by the ; delimiter.
All queries are executed in one transaction. All changes will be rolled back if any one of them fails.
Also, if you want to use prepared statements in your query, you need to define prepared statement variables like this: sqlVariableName = @MetadataVariableName:type, where:

sqlVariableName is the variable name in the SQL expression;
MetadataVariableName is the variable name in the metadata (it can be the same as sqlVariableName);
type is the type of the variable. The following types are supported:
For example, for the SQL expression SELECT * FROM tableName WHERE column1 = 'text' AND column2 = 15,
you need to use the following template:
SELECT * FROM tableName WHERE column1 = @column1:string AND column2 = @column2:number
and put the values into the generated metadata.
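For this template, the generated input metadata would be filled with values along these lines (a sketch; the field names follow the @-variable names in the template):

{
  "column1": "text",
  "column2": 15
}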
Input metadata is generated from the SQL Query configuration field if this field contains at least one defined variable.
Output metadata is an array of arrays with the results of the executed queries; its length depends on the number of SQL queries that were executed. An execution that does not return any results contributes an empty array. For example, for the SQL script:
INSERT INTO tableOne (column1, column2) VALUES ('value1', 'value2');
SELECT * FROM table2;
the first SQL query, INSERT INTO tableOne (column1, column2) VALUES ('value1', 'value2'), does not return any values, and the second SQL query, SELECT * FROM table2, returns two records.
Output metadata for this example is:
[
  [],
  [
    {
      "col2": 123,
      "col1": "abc"
    },
    {
      "col2": 456,
      "col1": "def"
    }
  ]
]
Expert mode. This action executes raw SQL passed in from a previous step (hence the name Sql Injection). You cannot use prepared statements here; for that purpose, use the General Sql Query action.
Input metadata contains two fields:

SQL Expression;
Number of retries in case of deadlock transaction.

Into SQL Expression you can put an SQL query, an SQL script, or a set of SQL queries from the previous step. You can put a single SQL query there, or several queries separated by the ; delimiter.
All queries are executed in one transaction. All changes will be rolled back if any one of them fails.
For example, suppose you have a file with a defined SQL script and want to execute it. On the previous step, you need to use some component that can read this file and return a value like this:
{
  "query_string": "INSERT INTO tableOne (column1, column2) VALUES ('value1', 'value2'); SELECT * FROM table2"
}
and in this action you need to put query_string (or some JSONata expression) into the Sql Injection string field.
You can specify the maximum number of retries; this is intended to help resolve locking issues in case of a deadlocked transaction.
The delay between retries is 1 second.
The default value for this configuration field is 0, which means that this behavior is switched off by default and no retry will be performed in case of a deadlocked transaction.
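For example, with the value 3 the component will retry a deadlocked transaction up to 3 times, waiting 1 second between attempts.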
Output metadata is an array of arrays with the results of the executed queries; its length depends on the number of SQL queries that were executed. An execution that does not return any results contributes an empty array. For example, for the SQL script:
INSERT INTO tableOne (column1, column2) VALUES ('value1', 'value2');
SELECT * FROM table2;
the first SQL query, INSERT INTO tableOne (column1, column2) VALUES ('value1', 'value2'), does not return any values, and the second SQL query, SELECT * FROM table2, returns two records.
Output metadata for this example is:
[
  [],
  [
    {
      "col2": 123,
      "col1": "abc"
    },
    {
      "col2": 456,
      "col1": "def"
    }
  ]
]
]
The SQL language is pretty extensive and complicated, so we tried to design the templating to be minimally invasive, so that you can express yourself in SQL with maximum flexibility. The implementation of the templating is based on prepared statements and hence should be safe against many SQL injection attacks. The second technology used here is JavaScript template literals (we are using this library internally), so you can even do property traversal and string manipulation in the templates. Let us demonstrate how the templating works on a sample. Let's take an incoming message like this:
{
  "body": {
    "name": "Homer Simpson",
    "age": 38,
    "address": {
      "street": "742 Evergreen Terrace",
      "city": "Springfield"
    }
  }
}
If we would like to insert it into the database, we would use the following template:
INSERT INTO customers (name, age, address) VALUES (${name},${age},${address.street + address.city})
So, as you can see in the example above, type conversion happens automatically, and you can traverse and concatenate values.
Now the SELECT example:
SELECT * FROM customers WHERE address LIKE ${'%' + address.city + '%'}
Same as above, concatenation and traversal in action.
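Under the hood, the template above is compiled into a prepared statement with a bound parameter, so it reaches the server roughly like this (a sketch; the placeholder $1 carries the concatenated value, e.g. '%Springfield%'):

SELECT * FROM customers WHERE address LIKE $1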
There are several limitations of the component:
If in doubt, contact support.
Apache-2.0 © elastic.io GmbH