jberkus opened 10 years ago
Yeah, a script would be nice.
I'm also thinking that a script that backs up by copying the table files is a desirable approach for cstore. If you do COPY TO instead, you're decompressing the data only to recompress it again.
I added this to the list of features to be added in v1.3. v1.3 should be released sometime in the 1st quarter of 2015.
Postponed to a later release.
So I assume we can't use pg_dump to create backups of the cstore files?
We have a backup/mirror script for cstore tables, but it assumes that the database doesn't change while the mirror script is running...
Script to translate between cstore filenames and a readable form:

```sql
SELECT
    (SELECT oid
       FROM pg_database
      WHERE datname = current_database()) AS dboid,
    ftrelid,
    nspname,
    relname
FROM pg_foreign_table
JOIN pg_class ON pg_class.oid = ftrelid
JOIN pg_namespace ON pg_namespace.oid = relnamespace;
```
Using this query, the mirror script is basically:
```shell
# [... pg_dump ...]

echo "Copying cstore files to tempdir..."
${ssh} ${production_host} "rm -rvf \"${dest_dir}/cstore_fdw/\""
${ssh} ${production_host} "mkdir -pv \"${dest_dir}/cstore_fdw/\""
${ssh} ${production_host} "cat /bi/mirror/get-cstore-tables.sql | ${psql} --user=dwh --tuples-only --no-align --field-separator=' ' --quiet ${production_database} | awk '{print \"cp -v {{pillar['pg-data-dir']}}/cstore_fdw/\"\$1\"/\"\$2\" ${dest_dir}/cstore_fdw/\"\$3\".\"\$4\";\ncp -v {{pillar['pg-data-dir']}}/cstore_fdw/\"\$1\"/\"\$2\".footer ${dest_dir}/cstore_fdw/\"\$3\".\"\$4\".footer\"}' | bash"
${ssh} ${production_host} "chmod -R 755 \"${dest_dir}/cstore_fdw/\""

# [...]

echo "rsync cstore files to destination..."
${ssh} ${dest_host} "rm -rvf \"${dest_dir}/cstore_fdw/\""
${ssh} ${dest_host} "mkdir -pv \"${dest_dir}/cstore_fdw/\""
${ssh} ${dest_host} "rsync -av --delete ${production_host}:${dest_dir}/cstore_fdw/ ${dest_dir}/cstore_fdw/"
${ssh} ${dest_host} "chmod -R 755 ${dest_dir}/cstore_fdw"

# [...]
# [... pg_restore ...]

echo "Restoring cstore files from tempdir..."
${ssh} ${dest_host} "cat /bi/mirror/get-cstore-tables.sql | ${psql} --tuples-only --no-align --field-separator=' ' --quiet ${dest_tmp_db} | awk '{print \"cp -v ${dest_dir}/cstore_fdw/\"\$3\".\"\$4\" {{pillar['pg-data-dir']}}/cstore_fdw/\"\$1\"/\"\$2\";\ncp -v ${dest_dir}/cstore_fdw/\"\$3\".\"\$4\".footer {{pillar['pg-data-dir']}}/cstore_fdw/\"\$1\"/\"\$2\".footer\"}' | bash"
```
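For reference, the awk one-liner above can be sketched more readably as a small shell function. This is only an illustration: `PG_DATA_DIR` stands in for `{{pillar['pg-data-dir']}}`, `DEST_DIR` for the script's destination directory, and the input rows are the `dboid ftrelid nspname relname` tuples produced by the query earlier in the thread.

```shell
# Sketch: turn one "dboid relid schema table" row from the catalog query
# into the two cp commands (data file + .footer) the one-liner generates.
# PG_DATA_DIR and DEST_DIR defaults are placeholder assumptions.
PG_DATA_DIR="${PG_DATA_DIR:-/var/lib/postgresql/data}"
DEST_DIR="${DEST_DIR:-/tmp/mirror}"

cstore_cp_cmds() {
    # Reads "dboid relid schema table" rows on stdin, prints cp commands.
    while read -r dboid relid nsp rel; do
        src="${PG_DATA_DIR}/cstore_fdw/${dboid}/${relid}"
        dst="${DEST_DIR}/cstore_fdw/${nsp}.${rel}"
        printf 'cp -v %s %s\n' "$src" "$dst"
        printf 'cp -v %s.footer %s.footer\n' "$src" "$dst"
    done
}

# e.g.: printf '16384 24576 public sales\n' | cstore_cp_cmds
```

Piping the generated commands through `bash` (as the original script does) then performs the actual copy.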
@Dezzsoke correct, you cannot.
FYI: if you install cstore files to the default location, a pg_basebackup binary backup will include them. It's only pg_dump that will not.
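As a sketch of that approach (host, user, and paths below are placeholders, not from this thread), a physical base backup picks up the cstore files along with the rest of the data directory:

```shell
# Physical base backup: copies the whole data directory, including
# $PGDATA/cstore_fdw/, unlike a logical pg_dump.
# Connection details and paths are placeholder assumptions.
pg_basebackup \
    -h production_host \
    -U replication_user \
    -D /backups/base_$(date +%Y%m%d) \
    -Ft -z \
    --checkpoint=fast \
    -P
```

`-Ft -z` writes gzipped tar output; restore is then a matter of unpacking the tarball into a fresh data directory.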
FYI: when working with partitioned tables you want to use SELECT * FROM ONLY table. Otherwise you'd dump the data twice if you copy it out from both the parent and the partitions. (If you've already done this, a TRUNCATE ONLY on the parent drops the data you don't need.)
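For example (table name is illustrative, not from this thread), the ONLY keyword restricts both the export and the cleanup to the parent table itself:

```sql
-- Dump only the parent's own rows; child partitions are exported separately.
COPY (SELECT * FROM ONLY measurements) TO '/tmp/measurements_parent.csv' CSV;

-- If parent rows were accidentally duplicated into a restore of the parent,
-- TRUNCATE ONLY clears just the parent, leaving the partitions intact:
TRUNCATE ONLY measurements;
```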
$subj pretty much says it. How do we back up the cstore_fdw files? Ideas?