brianc / node-pg-types

Type parsing for node-postgres
267 stars 54 forks

Add support for parsing RECORD types (OIDs 2249 and 2287) #145

Closed thejhh closed 1 year ago

thejhh commented 1 year ago

My reason for using ROW instead of JSON aggregators is that I cannot control how PostgreSQL's BIGINTs get turned into unsafe JavaScript numbers. However, it appears I cannot do that with records either, since PostgreSQL and/or the pg module does not provide exact type information telling me that a number inside a record was a BIGINT.

If I have understood this correctly, all items in the record string are actually strings or nulls, even if one looks like a number without double quotes. I.e. the type information is stripped away.

Example query for record array

SELECT
  "carts".*,
  array_agg(ROW("cart_items"."cart_item_id", "cart_items"."cart_id", "cart_items"."cart_item_name")) AS "cartItems"
FROM carts
LEFT JOIN "cart_items" ON "carts"."cart_id" = "cart_items"."cart_id"
GROUP BY "carts"."cart_id";

...where the cartItems property would be: [["1234", "..."], ["1235", "..."]]

Example query for single record

SELECT
  "cart_items".*,
  unnest(array_agg(ROW("carts"."cart_id", "carts"."cart_name"))) AS cart
FROM "cart_items"
LEFT JOIN "carts" ON "carts"."cart_id" = "cart_items"."cart_id"
GROUP BY "cart_items"."cart_item_id";

...where the cart property would be: ["1234", "..."]

Tables in examples

CREATE TABLE carts (
  cart_id BIGSERIAL PRIMARY KEY,
  cart_name varchar(255) NOT NULL default ''
);

CREATE TABLE cart_items (
  cart_item_id BIGSERIAL PRIMARY KEY,
  cart_id BIGINT NOT NULL,
  cart_item_name varchar(255) NOT NULL default ''
);
boromisp commented 1 year ago

With records the best you can get is an array of strings, right? Since there is no type information. If that's all you want, you could try plugging in something like this: https://github.com/boromisp/postgres-composite. And for the array of records type you would combine that with https://github.com/bendrucker/postgres-array.
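As a sketch of that two-pass idea: split the outer array text first, leaving each element as a raw record string for a composite parser (such as postgres-composite) to decode afterwards. This is a hand-rolled illustration following PostgreSQL's array output rules, not the linked modules' actual code; unquoted NULL elements are not handled here:

```javascript
// Hypothetical splitter for a PostgreSQL array-of-records text value,
// e.g. {"(1,\"a\")","(2,\"b\")"}  ->  ['(1,"a")', '(2,"b")'].
// Each element stays a raw record string for a second parsing pass.
function splitRecordArray (text) {
  if (text === '{}') return []
  const elements = []
  let value = ''
  let inQuotes = false
  for (let i = 1; i < text.length - 1; i++) { // skip outer '{' and '}'
    const ch = text[i]
    if (ch === '\\') { // backslash escape inside a quoted element
      value += text[++i]
    } else if (ch === '"') {
      inQuotes = !inQuotes
    } else if (ch === ',' && !inQuotes) {
      elements.push(value)
      value = ''
    } else {
      value += ch
    }
  }
  elements.push(value)
  return elements
}
```

The raw strings this produces could then be fed element by element into a record parser, and the combined function registered with pg-types via setTypeParser for OID 2287 (_record).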

thejhh commented 1 year ago

That is true: records are just arrays of strings or nulls. The data itself is a text string which can be decoded into an array of strings and nulls.

I think it is possible to define types for them, and I think PostgreSQL would then give more information in the fields section of the response -- probably the name/OID of the type -- but I have not tested this.

I actually made a parser function for record strings yesterday:
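A minimal sketch of one way to write such a record-string parser -- handling quoted fields, the `""` and `\"` escapes, and treating an unquoted empty field as SQL NULL:

```javascript
// Illustrative parser for a PostgreSQL composite/record text value,
// e.g. (1234,"Cart name")  ->  ['1234', 'Cart name'].
// All fields come back as strings (or null); type info is not preserved.
function parseRecord (text) {
  if (text[0] !== '(' || text[text.length - 1] !== ')') {
    throw new TypeError('Invalid record string: ' + text)
  }
  const fields = []
  let i = 1
  while (i < text.length) {
    let value = ''
    let quoted = false
    if (text[i] === '"') {
      quoted = true
      i++
      while (i < text.length) {
        if (text[i] === '\\') { // backslash escape
          value += text[i + 1]
          i += 2
        } else if (text[i] === '"') {
          if (text[i + 1] === '"') { // doubled-quote escape
            value += '"'
            i += 2
          } else {
            i++ // closing quote
            break
          }
        } else {
          value += text[i++]
        }
      }
    } else {
      while (i < text.length && text[i] !== ',' && text[i] !== ')') {
        value += text[i++]
      }
    }
    // An unquoted empty field means SQL NULL
    fields.push(quoted || value !== '' ? value : null)
    i++ // skip ',' or ')'
  }
  return fields
}
```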

I don't have a parser for an array of ROWs, though. Those two modules look promising, although we don't usually want to introduce new dependencies into our zero-dependency core modules.

(PS: For my initial problem I ended up using JSON instead -- I just changed those BIGINT fields to use ::text.)
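For context, that workaround works because a BIGINT cast with ::text arrives inside the JSON as a string, which JSON.parse keeps lossless. A minimal illustration with hypothetical row data (assuming the query casts the column, e.g. "cart_item_id"::text inside json_build_object):

```javascript
// 2^53 + 1 is not representable as a JavaScript number,
// but survives exactly when delivered as a JSON string.
const raw = '[{"cart_item_id":"9007199254740993","cart_item_name":"Widget"}]'
const items = JSON.parse(raw)

console.log(items[0].cart_item_id)         // '9007199254740993' (exact)
console.log(Number(items[0].cart_item_id)) // 9007199254740992 (rounded)
```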

bendrucker commented 1 year ago

Based on the discussion here, I don't see this getting added. If this were possible, it would probably be useful:

I think it is possible to define types to them and I think then PostgreSQL would give more information in the fields section of the response -- probably the name/OID of the type -- but I have not tested this.

Otherwise, this seems best left as an exercise for the user, since, as noted, you can at best parse the original entries into strings and then still need application-specific code to convert them to the correct types.