Closed: marcj closed this issue 1 month ago
Hi. Do not stringify the object; just pass it in the params:
```js
const doc = { flag: true };
await con.query('INSERT INTO model1 (doc) VALUES ($1)', { params: [doc] });
```
Or you can define the bind parameter explicitly:
```js
const doc = { flag: true };
await con.query('INSERT INTO model1 (doc) VALUES ($1)', {
  params: [new BindParam(DataTypeOIDs.jsonb, doc)]
});
```
Thanks! I also use literal values for jsonb, such as strings, numbers, etc. This works fine with pg, so it would be lovely if this library could support it as well, without having to specify each param manually.
pg uses the text format for data transfer, which is very slow and cumbersome. postgresql-client uses the binary format, which is fast and performant. This is why we created this library.
In the text format, everything (dates, numbers, binary data, etc.) is converted to text and then sent to the server. This is why you can send any value as a literal. The text format is easy to implement and easy to use.
If there is no intense data flow in your application, you can use other libraries that use the text format. If your application needs performance, you should use a library that uses the binary format.
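As a rough illustration of "everything is converted to text" (a hedged sketch of what a text-format driver does in general, not postgresql-client internals):

```typescript
// Illustrative sketch only: roughly how a text-format driver stringifies each
// parameter before it goes on the wire. These conversions are the common
// conventions, not this library's actual encoder.
const params: unknown[] = [42, new Date('2024-01-01T00:00:00Z'), Buffer.from([0xde, 0xad])];

const asText = params.map((v) =>
  Buffer.isBuffer(v)
    ? '\\x' + v.toString('hex') // bytea is typically sent in its hex text form
    : v instanceof Date
      ? v.toISOString() // timestamps become ISO strings
      : String(v) // numbers, booleans, etc. become plain strings
);

console.log(asText); // [ '42', '2024-01-01T00:00:00.000Z', '\\xdead' ]
```

Every one of these strings then has to be parsed back into a typed value by the server, which is the overhead the binary format avoids.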
How is the text format slower? Last time I benchmarked, V8 was notoriously slow at converting UTF-8 text to binary and vice versa. Do you have any benchmarks showing that binary is actually faster for Postgres? From my experience writing a very fast binary encoder in JS, it gets slower and slower compared to JSON.parse, for example, the longer the strings are. If your message consists primarily of numbers and small strings (<12 characters), binary is faster, though.
You can find some benchmark results on the internet. Here are some links I found:
https://www.postgresql.org/message-id/466299A7.20403@eba.si
In the text protocol, data is first converted to a string on the client side, sent to the server, and then parsed again by the server. This workflow costs significant CPU time on both the client and the server, and the data transferred over the network is also larger.
Text transfer format: [Client] raw data -> stringify -> transfer -> [Server] parse -> execute -> stringify -> transfer -> [Client] parse
Binary transfer format: [Client] raw data -> transfer -> [Server] execute -> transfer -> [Client]
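The difference in the two pipelines can be sketched for a single float8 value (a hedged illustration of the general idea, not this library's actual protocol code):

```typescript
// Sketch: the wire payload for one float8 value in each transfer format.
const value = 123456.789;

// Text format: the client stringifies; the server must parse the string back.
const textWire = Buffer.from(String(value), 'utf8'); // 10 bytes: "123456.789"

// Binary format: the IEEE-754 double is written directly (big-endian on the wire).
const binaryWire = Buffer.alloc(8);
binaryWire.writeDoubleBE(value); // always 8 bytes, no string parsing on either side

console.log(textWire.length, binaryWire.length); // 10 8
```

The binary payload is fixed-size and needs no parse step on either end, which is where the CPU savings described above come from.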
Thank you! I'll play around with your client in binary mode and compare it with our pg implementation in our benchmark suite. It would be awesome if your client is much faster; then we could switch our ORM's Postgres adapter to yours.
It would be very nice to have benchmark results. Don't forget that text columns (chars, varchars, json, jsonb, etc.) will not make a difference, since they are not binary. Binary data columns (number, date, blob, etc.) and arrays of binary columns are where the difference will show. And a memory-usage benchmark would be perfect, too. Please contact me anytime if you need help.
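The point about text columns can be checked directly. Assuming (as PostgreSQL documents) that the binary representation of text/varchar is simply the raw bytes in the server encoding, the payload is identical in both formats:

```typescript
// Sketch: a varchar's payload is the same UTF-8 bytes in both transfer
// formats, so only non-text types (numbers, dates, bytea, arrays) benefit
// from the binary protocol.
const s = 'héllo world';
const textFormatPayload = Buffer.from(s, 'utf8'); // text protocol payload
const binaryFormatPayload = Buffer.from(s, 'utf8'); // binary format of text/varchar: raw UTF-8

console.log(textFormatPayload.equals(binaryFormatPayload)); // true
```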
Hi marcj. I'm curious whether you were able to produce a benchmark result.
Describe the bug
I have a table
and I want to insert data
I get the following error:
(By the way, the error stack is wrong: my original con.query call is not in the stack trace.)
Expected behavior
The insert goes through without a warning, like with the pg package, which works that way. I do not expect to pass an explicit type for each parameter, as I often have many parameters and this would destroy the UX.