My original goal was to populate an FDW table from a CSV file, but Postgres doesn't allow that, at least not in my version (9.4.1).
So I created a real table with the same structure as the FDW table, populated the real table from the CSV file, and then tried to "copy" from the real table to the FDW table with INSERT ... SELECT. This works up to about 11,000 rows, and then I get the error below. The strange part is that my insert() method doesn't do anything except log that it was called.
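For context, here is a minimal sketch of the kind of log-only insert() I mean, assuming a Multicorn-style Python FDW (the class name, logger name, and the stub base class that lets it run outside PostgreSQL are illustrative only, not my actual deployment):

```python
import logging

try:
    # Available when running inside PostgreSQL with Multicorn installed.
    from multicorn import ForeignDataWrapper
except ImportError:
    # Illustrative stub so the sketch can run standalone.
    class ForeignDataWrapper(object):
        def __init__(self, options, columns):
            self.options = options
            self.columns = columns

log = logging.getLogger("mcmcond_fdw")

class McmcondWrapper(ForeignDataWrapper):
    def __init__(self, options, columns):
        super(McmcondWrapper, self).__init__(options, columns)
        self.columns = columns

    def execute(self, quals, columns):
        # Read side returns no rows; this wrapper is only being
        # exercised on the write side.
        return []

    def insert(self, new_values):
        # Log that we were called and discard the row -- no actual
        # insert is performed.
        log.info("insert() called with %r", new_values)
        return new_values
```

Each row produced by the INSERT ... SELECT should simply result in one log call and nothing else.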
As a sanity check, I created a second clone table and copied from one real table to another real table, i.e. a real copy. That works.
Here are the log messages from my attempt to copy from a real table (temp_mcmcond) into the FDW table (mcmcond), with log_min_messages = debug1. The copy failed even though my insert() method only logs a message when called; it doesn't perform an actual insert. I was just testing. The command does work if I reduce the number of rows in the source table.
Fatal Python error: deallocating None
LOG: server process (PID 20258) was terminated by signal 6
DETAIL: Failed process was running: INSERT INTO mcmcond (code,type,class_only,gender,preg,lact,from_age,to_age,age_unit,duration,res) SELECT * FROM temp_mcmcond
LOG: terminating any other active server processes
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
LOG: all server processes terminated; reinitializing
DEBUG: mmap with MAP_HUGETLB failed, huge pages disabled: Out of memory
LOG: database system was interrupted; last known up at 2015-08-24 15:37:19 EDT
DEBUG: checkpoint record is at 0/5BA4F18
DEBUG: redo record is at 0/5BA4F18; shutdown TRUE
DEBUG: next transaction ID: 0/4049; next OID: 51556
DEBUG: next MultiXactId: 1; next MultiXactOffset: 0
DEBUG: oldest unfrozen transaction ID: 711, in database 1
DEBUG: oldest MultiXactId: 1, in database 1
DEBUG: transaction ID wrap limit is 2147484358, limited by database with OID 1
DEBUG: MultiXactId wrap limit is 2147483648, limited by database with OID 1
DEBUG: starting up replication slots
LOG: database system was not properly shut down; automatic recovery in progress
DEBUG: resetting unlogged relations: cleanup 1 init 0
LOG: unexpected pageaddr 0/3BA6000 in log segment 000000010000000000000005, offset 12214272
LOG: redo is not required
DEBUG: resetting unlogged relations: cleanup 0 init 1
DEBUG: performing replication slot checkpoint
LOG: database system is ready to accept connections
LOG: autovacuum launcher started