Open Cluas opened 2 years ago
query like this:
"\n\t insert into public.awsdms_ddl_audit\n\t values\n\t (\n\t default,current_timestamp,current_user,cast(TXID_CURRENT()as varchar(16)),tg_tag,0,'',current_schema,_qry\n\t );"
I get this one too
cat ./dump | replibyte -c conf.yaml dump create -i -s postgresql
Same here running locally
fabiojwalter@mac ~/Documents/projects/data/replibyte $ replibyte -c config.yaml dump create
⠦ [00:00:00] [>-------------------------------------------------------------------] 23.05KiB/100.00MiB (10m)
thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `6`,
 right: `8`: Column names do not match values: got 6 names and 8 values', replibyte/src/source/postgres.rs:364:5
   0: _rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::panicking::assert_failed_inner
   3: core::panicking::assert_failed
   4: replibyte::source::postgres::transform_columns
   5: replibyte::source::postgres::read_and_transform::{{closure}}
   6: dump_parser::utils::list_sql_queries_from_dump_reader
   7: <replibyte::tasks::full_dump::FullDumpTask<S> as replibyte::tasks::Task>::run
   8: replibyte::commands::dump::run
   9: replibyte::main
note: Some details are omitted, run with RUST_BACKTRACE=full for a verbose backtrace.
I have a little more information on this: it appears this assertion was added to guard against parsing problems.
There are cases where the column names are not parsed correctly, and cases where the values are not parsed correctly.
If a column name contains a character the parser does not accept as valid in an identifier, it splits the name at that character.
If a value contains data the parser cannot handle, it likewise splits the value at the boundaries it contrived.
So any column name or value that the parser mishandles changes the name or value count and triggers this error.
For example:
INSERT INTO public.some_table (id, percentage) VALUES (563338, 5e-06);
will be reported as having more values than the row has columns.
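The numeric case can be sketched like this. To be clear, this is not Replibyte's actual tokenizer (the real code is in `replibyte/src/source/postgres.rs`); it is a hypothetical illustration of the failure class, where an over-eager splitter breaks a scientific-notation literal such as `5e-06` at the `-`, inflating the value count:

```rust
// Hypothetical sketch of the failure class, NOT Replibyte's real parser.
// A splitter that treats `-` inside a token as a boundary fragments the
// numeric literal `5e-06` into two pieces, so the row appears too wide.
fn naive_split(values: &str) -> Vec<String> {
    values
        .split(',')
        .flat_map(|v| v.split('-')) // over-eager: breaks `5e-06` into `5e` and `06`
        .map(|v| v.trim().to_string())
        .collect()
}

// Only top-level commas separate values; a `-` inside a numeric
// literal is part of that literal, per the SQL lexical rules.
fn spec_aware_split(values: &str) -> Vec<String> {
    values.split(',').map(|v| v.trim().to_string()).collect()
}

fn main() {
    let values = "563338, 5e-06"; // two values for two columns
    assert_eq!(naive_split(values).len(), 3); // naive parse yields 3 "values"
    assert_eq!(spec_aware_split(values).len(), 2); // correct parse yields 2
    println!("naive: {:?}", naive_split(values));
}
```

With two column names and three contrived values, the `left == right` assertion in `transform_columns` fires.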
Conversely if you have:
INSERT INTO public.some_table (id, {percentage=someweirdname}) VALUES (563338, 1.0);
you will end up with more column names than values.
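The column-name case is the mirror image. Again a hypothetical sketch, not Replibyte's real code: a name splitter that treats every non-identifier character as a separator fragments an unusual (but quotable) column name into several "names":

```rust
// Hypothetical sketch of the mirror failure: an identifier splitter that
// breaks on any character outside [A-Za-z0-9_] turns one odd column name
// into several, so there appear to be too many columns for the values.
fn naive_column_names(list: &str) -> Vec<String> {
    list.split(|c: char| !c.is_alphanumeric() && c != '_')
        .filter(|s| !s.is_empty())
        .map(|s| s.to_string())
        .collect()
}

fn main() {
    // Two real columns, but `{` and `=` act as separators here:
    let names = naive_column_names("id, {percentage=someweirdname}");
    assert_eq!(names, vec!["id", "percentage", "someweirdname"]); // 3 names, 2 values
    println!("{:?}", names);
}
```

Three contrived names against two values trips the same assertion from the other direction.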
The root cause is that the parsing functions for both the column names and the values do not adhere to the SQL specification, at least for PostgreSQL in my testing.
Additional note for whoever is watching along: we are having to abandon the use of Replibyte. There are far too many issues and we cannot proceed.
I had so much hope for this product. The concept is good, but it needs a total architectural overhaul. I think it would work better as a tool that takes an already-restored database and edits the data in place, rather than trying to modify the data in flight.