Open rmadiwaledba opened 6 years ago
hi @rmadiwaledba, thanks for the report, I can reproduce it on our end. It looks like our COPY code does not play nicely with serial fields.
As you stated, COPY works when you do not have a serial field. As a workaround, you can copy the data into a regular PostgreSQL table first, then insert it into the cstore table with `insert into cstore_table select * from regular_table`.
We will look into fixing this in our next release.
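A minimal sketch of that workaround as SQL. The table names (`regular_table`, `cstore_table`) and the `name` column are placeholders from the comment above, not a confirmed schema; this assumes a working `cstore_server` foreign server.

```sql
-- Stage the CSV in a plain heap table first; COPY into a regular
-- table handles the serial column's nextval() default normally.
CREATE TABLE regular_table (id serial, name varchar);
COPY regular_table (name) FROM '/home/postgres/c1.csv' WITH csv;

-- Then move the rows into the cstore foreign table in one statement,
-- carrying over the already-generated id values.
INSERT INTO cstore_table SELECT * FROM regular_table;
```

The staging table can be dropped (or truncated and reused) once the insert completes.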
Yes, that's correct: it is not handling serial data types.
Can it be classified as a bug?
yes, it is a bug
I am also running into this issue.
When I try to load data into a table that contains a serial data type, it crashes and restarts the postmaster.
Here are the steps to reproduce; this is our test case:
```
test=# \des
           List of foreign servers
     Name      |  Owner   | Foreign-data wrapper
---------------+----------+----------------------
 cstore_server | postgres | cstore_fdw
(1 row)

test=# \det
       List of foreign tables
 Schema | Table |    Server
--------+-------+---------------
 public | c1    | cstore_server
(1 row)

test=# \d+ c1
                                              Foreign table "public.c1"
 Column |       Type        |                    Modifiers                    | FDW Options | Storage  | Stats target | Description
--------+-------------------+-------------------------------------------------+-------------+----------+--------------+-------------
 id     | integer           | not null default nextval('c1_id_seq'::regclass) |             | plain    |              |
 name   | character varying |                                                 |             | extended  |              |
Server: cstore_server
FDW Options: (compression 'pglz')

test=# \q
[postgres@localhost ~]$ vi c1.csv
[postgres@localhost ~]$ cat c1.csv
test
test
[postgres@localhost ~]$ psql test
psql (9.6.6)
Type "help" for help.

test=# copy c1(name) from '/home/postgres/c1.csv' WITH csv;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> \q
```
The log file shows the same error as in your case:
```
< 2017-12-21 19:30:02.792 IST > LOG:  server process (PID 90463) was terminated by signal 11: Segmentation fault
< 2017-12-21 19:30:02.792 IST > DETAIL:  Failed process was running: copy c1(name) from '/home/postgres/c1.csv' WITH csv;
< 2017-12-21 19:30:02.792 IST > LOG:  terminating any other active server processes
< 2017-12-21 19:30:02.796 IST > WARNING:  terminating connection because of crash of another server process
< 2017-12-21 19:30:02.796 IST > DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
< 2017-12-21 19:30:02.796 IST > HINT:  In a moment you should be able to reconnect to the database and repeat your command.
< 2017-12-21 19:30:02.802 IST > WARNING:  terminating connection because of crash of another server process
< 2017-12-21 19:30:02.802 IST > DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
< 2017-12-21 19:30:02.802 IST > HINT:  In a moment you should be able to reconnect to the database and repeat your command.
< 2017-12-21 19:30:02.833 IST > FATAL:  the database system is in recovery mode
< 2017-12-21 19:30:02.834 IST > LOG:  all server processes terminated; reinitializing
< 2017-12-21 19:30:02.967 IST > LOG:  database system was interrupted; last known up at 2017-12-21 19:22:30 IST
< 2017-12-21 19:30:03.016 IST > LOG:  database system was not properly shut down; automatic recovery in progress
< 2017-12-21 19:30:03.032 IST > LOG:  redo starts at 0/1AC9658
< 2017-12-21 19:30:03.032 IST > LOG:  invalid record length at 0/1ACB380: wanted 24, got 0
< 2017-12-21 19:30:03.032 IST > LOG:  redo done at 0/1AC9658
< 2017-12-21 19:30:03.052 IST > LOG:  MultiXact member wraparound protections are now enabled
< 2017-12-21 19:30:03.066 IST > LOG:  database system is ready to accept connections
< 2017-12-21 19:30:03.099 IST > LOG:  autovacuum launcher started
```
If I replace the serial data type with a plain integer, which does not use a sequence to generate values automatically, we do not hit this issue.
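For reference, a sketch of the failing versus working table definitions, reconstructed from the `\d+ c1` output above (a serial column is just an integer with a sequence-backed default, written out explicitly here; the `c1_plain` name and the separate `CREATE SEQUENCE` are illustrative assumptions, not from the original report):

```sql
-- Failing case: id defaults to a sequence value, which is what the
-- serial pseudo-type expands to. COPY into this table segfaults.
CREATE SEQUENCE c1_id_seq;
CREATE FOREIGN TABLE c1 (
    id   integer NOT NULL DEFAULT nextval('c1_id_seq'::regclass),
    name varchar
) SERVER cstore_server OPTIONS (compression 'pglz');

-- Working case: plain integer, no sequence-backed default.
CREATE FOREIGN TABLE c1_plain (
    id   integer,
    name varchar
) SERVER cstore_server OPTIONS (compression 'pglz');
```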