bill-ramos-rmoswi / babelfish_postgresql_towel

Bill Ramos's knowledge base for tips, tricks, and best practices for migrating SQL Server solutions to Babelfish for PostgreSQL and the open-source WiltonDB for Windows
Apache License 2.0

Storing Large XML in PostgreSQL #11

Open Aravind2601 opened 4 months ago

Aravind2601 commented 4 months ago

Hi @bill-ramos-rmoswi, we store some large XML data as column values in a SQL Server table. We used CLR assemblies to create compression and decompression functions so that the large XML is stored as a compressed string in SQL Server. Since assemblies are not supported in Babelfish, we are trying to replace those functions.

Questions: 1. Do we need to compress and decompress the large XML data in PostgreSQL? 2. If so, is there an option to compress and decompress XML in PostgreSQL?

Aravind2601 commented 4 months ago

@bill-ramos-rmoswi any update on this ?

staticlibs commented 4 months ago

@Aravind2601, I am not aware of explicit compression support in vanilla PostgreSQL/Babelfish without additional extensions. However, Babelfish XML data is stored using TOAST, which compresses it by default. I suggest looking at the size of the TOASTed data for the specific tables to find out whether additional compression (for example, on the client application side) is required. The TOAST size for a particular table can be looked up from pg_class like this:

```sql
select relname, pg_size_pretty(sum(pg_table_size(reltoastrelid)))
from pg_class
where relkind = 'r'
and relname = 'tab1'
group by relname;
```
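To judge whether additional client-side compression would gain much, it can also help to compare the logical size of each XML value with its stored (TOAST-compressed) size. A minimal sketch, assuming a hypothetical table `tab1` with an XML column named `xml_col` (both names are placeholders for your own schema):

```sql
-- Compare the uncompressed (logical) size of each XML value with the
-- space it actually occupies on disk after TOAST compression.
select octet_length(xml_col::text) as raw_bytes,   -- logical size of the value
       pg_column_size(xml_col)     as stored_bytes -- on-disk (compressed) size
from tab1
order by raw_bytes desc
limit 10;
```

If `stored_bytes` is already much smaller than `raw_bytes`, TOAST compression is doing its job and extra application-side compression may not be worth the added complexity.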

Listing the TOAST table for a particular table:

```sql
select t1.relname, t2.relname
from pg_class t1
inner join pg_class t2
on t1.reltoastrelid = t2.oid
where t1.relkind = 'r'
and t2.relkind = 't'
and t1.relname = 'tab1';
```

The default_toast_compression option may also be useful. WiltonDB is compiled with both the LZ4 and Zstandard compression libraries included.
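As a sketch of how the compression method can be changed (again assuming the hypothetical `tab1`/`xml_col` names; note that column-level TOAST compression in PostgreSQL accepts `pglz` and `lz4`, while Zstandard is used elsewhere, e.g. for `wal_compression`):

```sql
-- Switch the session default used for newly stored TOASTable values.
SET default_toast_compression = 'lz4';

-- Or set the compression method for a specific column; this affects
-- newly written values only, not existing rows.
ALTER TABLE tab1 ALTER COLUMN xml_col SET COMPRESSION lz4;

-- Check which method was used for stored values (PostgreSQL 14+).
select pg_column_compression(xml_col), count(*)
from tab1
group by 1;
```

Rewriting existing rows (for example with `VACUUM FULL` or an update that touches the column) is needed before old values pick up the new method.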