nuodb / migration-tools

Migration tools for NuoDB
http://nuodb.github.com/migration-tools/
BSD 3-Clause "New" or "Revised" License
27 stars 10 forks

Dump of PostgreSQL database: relation "public.null" does not exist #36

Open ghost opened 10 years ago

ghost commented 10 years ago

I am trying to run some tests on NuoDB and started by migrating a database I already had in Postgres on the server. Here are the steps I followed:

This gave me the error:

Execution failed com.nuodb.migrator.utils.ReflectionException: Failed to invoke public abstract java.sql.ResultSet java.sql.Statement.executeQuery(java.lang.String) throws java.sql.SQLException method on object of org.apache.commons.dbcp.DelegatingStatement class (...) Caused by: org.postgresql.util.PSQLException: ERROR: relation "public.null" does not exist Position: 15

From this, the relevant part seems to be "public.null does not exist", but I don't have a table called null, as you can imagine.

What can be wrong?

msullivan-nuodb commented 10 years ago

Please enable logging, re-run the migrator tool and attach a complete log file.

Logging can be enabled by editing conf/log4j.properties and commenting out this line:

log4j.logger.com.nuodb.migrator.jdbc.connection=none

The log file can be found in your /tmp directory. The log filename is nuodb-migrator.log
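After the edit, the relevant part of conf/log4j.properties would look something like this (a sketch; only the quoted property comes from this thread, and the comment is mine):

```properties
# Commenting out this line re-enables query logging for the migrator:
# log4j.logger.com.nuodb.migrator.jdbc.connection=none
```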

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer

The information contained in this email is confidential and proprietary and intended for the sole use of the intended recipient. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this email in error, please notify the sender and delete all copies.

ghost commented 10 years ago

Here is the log file:

http://1drv.ms/1nyoURs

msullivan-nuodb commented 10 years ago

Hello Hugobento, Thank you for uploading your log file. This is very helpful. If you look at the log file you can see what the migrator tool was doing when it aborted.

23-05-2014 09:44:23 (TRACE) (com.nuodb.migrator.jdbc.connection.SimpleQueryLogger.log(SimpleQueryLogger.java:46)) - WITH SEQUENCES AS (SELECT N.NSPNAME AS SCHEMA_NAME, C.RELNAME AS TABLE_NAME, A.ATTNAME AS COLUMN_NAME, SUBSTRING(PG_GET_EXPR(DEF.ADBIN, DEF.ADRELID) FROM 'nextval(''(.*)''::.*)') AS SEQUENCE_NAME FROM PG_CATALOG.PG_NAMESPACE N JOIN PG_CATALOG.PG_CLASS C ON (C.RELNAMESPACE = N.OID) JOIN PG_CATALOG.PG_ATTRIBUTE A ON (A.ATTRELID=C.OID) LEFT JOIN PG_CATALOG.PG_ATTRDEF DEF ON (A.ATTRELID=DEF.ADRELID AND A.ATTNUM = DEF.ADNUM) WHERE A.ATTNUM>0 AND NOT A.ATTISDROPPED AND ADSRC IS NOT NULL AND SUBSTRING(ADSRC FROM 0 FOR 9)='nextval(') SELECT * FROM SEQUENCES WHERE SCHEMA_NAME='public'

23-05-2014 09:44:23 (TRACE) (com.nuodb.migrator.jdbc.connection.SimpleQueryLogger.log(SimpleQueryLogger.java:46)) - SELECT * FROM "public".null

First, it was selecting sequences from the system metadata tables. This SQL returned a row that the migrator interpreted as a sequence named NULL. It then built the SQL statement SELECT * FROM "public".null. We are not yet sure whether this is caused by something specific to PostgreSQL version 8.4.

Can you try running the migrator tool DUMP with one additional command option, --meta.data.sequence=false, and see if this bypasses the issue for you? This will validate what we believe your issue to be.

Also, one unrelated suggestion about the command option --output.path=: this should be the name of a directory only, not path/filename. The migrator dump process writes an XML file, called dumpup.cat, and a data dump file for each table into this directory. The migrator load process reads from this directory via the option --input.path=
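As a sketch, a dump/load pair following this advice might look like the following (connection details and the /tmp/dump directory are placeholders, not taken from this thread):

```shell
# Dump: --output.path is a directory; the dump catalog and the
# per-table data files are all written into it.
./nuodb-migrator dump \
    --source.driver=org.postgresql.Driver \
    --source.url=jdbc:postgresql://localhost/mydb \
    --source.username=user --source.password=secret \
    --source.schema=public \
    --output.path=/tmp/dump

# Load: point --input.path at that same directory.
./nuodb-migrator load \
    --target.url=jdbc:com.nuodb://localhost/testdb \
    --target.username=user --target.password=secret \
    --input.path=/tmp/dump
```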

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

The information contained in this email is confidential and proprietary and intended for the sole use of the intended recipient. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this email in error, please notify the sender and delete all copies.

ghost commented 10 years ago

Hi again,

Trying that, an error comes up: java.lang.OutOfMemoryError: Java heap space. I was trying to pass an argument on the command line so that Java would use more than the default 64MB, like this:

./nuodb-migrator dump --source.driver=org.postgresql.Driver --source.url=jdbc:postgresql://localhost/delta_production --source.username=delta --source.password=password --source.schema=public --meta.data.sequence=false --output.path=/tmp/ -XX:MaxPermSize=512m

but it is not accepted as a valid option. Suggestions?

madscientist commented 10 years ago

Hi hugobento; Unfortunately it won't work to pass JVM options on the normal nuodb-migrator command line. Instead, you should set the environment variable JAVA_OPTS to contain any JVM options you want, like this:

JAVA_OPTS=-XX:MaxPermSize=512m ./nuodb-migrator dump ...

ghost commented 10 years ago

Java heap space error again. Log file here

http://1drv.ms/1hqt4Hq

Is there any other way I can migrate a Postgres database into a NuoDB database? Or dump a different kind of file? I mean, this method of CSV dumps seems to consume a lot of memory. The latest try was going well, I already had some CSV files (11 to be exact), but I guess it choked on a larger table.

ghost commented 10 years ago

Just a further explanation: the table where it breaks has (in this case) 4.5 million rows. It can have 10 times that amount in a production environment.

msullivan-nuodb commented 10 years ago

Hi Hugobento,

The default nuodb-migrator script is not optimally tuned for very large databases out of the box; we recommend a couple of tweaks:

In the bin/nuodb-migrator script that you are running, change the variable MAX_HEAP_SIZE to MAX_HEAP_SIZE="10G" (the default is "256M") and add: JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"
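Concretely, the edit to bin/nuodb-migrator described above would look something like this (a sketch; the surrounding script contents are assumed):

```shell
# in bin/nuodb-migrator
MAX_HEAP_SIZE="10G"                      # default is "256M"
JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"      # use the G1 garbage collector
```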

There is also a dump command option, --threads=value. This is the number of concurrent processing threads (i.e. a value of 1 causes single-threaded processing). The default value is the number of CPU cores. You could try using --threads=1

Try dumping just this one very large table, using the dump command option --table=table-name (the name of this very large table). You can pass one or more table names to this option: --table=table1,table2,table3....

NOTE: if you have constraints on this table referencing other tables in your database, you may have to use the dump command option --meta.data.foreign.key=false, which skips migrating foreign key constraints. You can migrate these after all the table data has been migrated.
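Putting the options above together, a dump of just the one large table might look like this (the table name, connection settings, and output directory are placeholders):

```shell
./nuodb-migrator dump \
    --source.driver=org.postgresql.Driver \
    --source.url=jdbc:postgresql://localhost/mydb \
    --source.username=user --source.password=secret \
    --source.schema=public \
    --threads=1 \
    --table=big_table \
    --meta.data.foreign.key=false \
    --output.path=/tmp/dump
```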

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

msullivan-nuodb commented 10 years ago

Hi Hugobento, If you would like, we can do some further analysis on your issue. Would it be possible for you to re-run your migrator DUMP command and send us a heap dump? This can be generated by using the jmap utility:

jmap -dump:format=b,file=/tmp/dump.hprof <pid>

where <pid> is the process id of the migrator-tool process that you are running.
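For example, assuming the migrator's JVM can be located with pgrep (an assumption about your environment; any other way of finding the process id works too):

```shell
PID=$(pgrep -f nuodb-migrator)            # find the migrator's JVM process id
jmap -dump:format=b,file=/tmp/dump.hprof "$PID"
```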

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

ghost commented 10 years ago

Hello,

Thanks for your response and availability. I will try to do as you ask, but with one constraint: I do not have 10G of memory available on this machine...

One suggestion (from an outsider's point of view, of course) would be to handle this type of process with streaming, or to restrain it within a buffer size.

I'll keep you posted. Regards.

ghost commented 10 years ago

Same result; I was only able to go up to 2G of memory. As for my previous question: is there any other way of importing/migrating a PostgreSQL database into NuoDB?

Regards.

msullivan-nuodb commented 10 years ago

Hi Hugobento, The migrator-tool is the best way to import/migrate your database to NuoDB. It will generate the schema, converting datatypes as required, and load the data for you. It will also create the table/column constraints and other indexes.

By default the PostgreSQL driver collects all the results of a query at once. For a large database, this can result in the issue you are seeing. Our engineer has provided a fix that overrides this default driver behavior. This fix is available for download, along with instructions, here: https://github.com/nuodb/migration-tools/#dump-data-from-an-existing-database Let me know if you have any questions.

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

ghost commented 10 years ago

Hello msullivan,

I will test this. Anyhow, I was checking the code that was altered and I have to tell you I'm a bit worried about one statement, specifically statement.setFetchSize(streamResults ? 1 : 0); which seems to fetch either all the rows (fetch size 0) or one row at a time (fetch size 1). Am I reading this right?

I'll test it and get back to you. Regards.

msullivan-nuodb commented 10 years ago

Hello Hugobento, I wanted to follow up with you. I believe the migrator tool was running out of memory on this SQL statement: SELECT {columns} FROM "public"."daily_non_sales"

Did this new version of the NuoDB Migrator tool solve the issue?

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

ghost commented 10 years ago

Hello. Only today did I have the chance to test it out. Some good pointers, but still a bug. Here is the log file: http://1drv.ms/1kpY2FE

msullivan-nuodb commented 10 years ago

Hello Hugobento, Our engineering team is reviewing your log file. I will update you as soon as there is a resolution for your issue.

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

msullivan-nuodb commented 10 years ago

Hello Hugobento, We suspect the issue you are having is related to the conversion of a specific data type. Could you please provide us with the DDL used to create this table so that we may attempt to reproduce the issue you are having.

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

msullivan-nuodb commented 10 years ago

Hello Hugobento, I would like to follow up on this issue you are having with migrating your PostgreSQL database to NuoDB. We suspect that the error you are seeing is related to a particular data type that our migrator-tool does not currently support. If you could provide us with the CREATE TABLE DDL script for your table, daily_movements, we would like to work with you to resolve this issue. Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

msullivan-nuodb commented 10 years ago

Hello Hugobento, Please email support@nuodb.com when you are ready to proceed with your PostgreSQL database migration. We will be happy to help you.

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

ghost commented 10 years ago

I am sorry for the delay. The script for the creation of the table is available here:

http://1drv.ms/1zpp0ST

msullivan-nuodb commented 10 years ago

Hello Hugobento, I have been able to recreate your issue using a table created with this DDL.

First, I want to make sure you are using the most recent version of the migrator tool. This most recent version of the migrator-tool fixes the issue you were having initially with migrating sequences in PostgreSQL version 8.x. You can download and build this migrator-tool version from GitHub: https://github.com/nuodb/migration-tools/#dump-data-from-an-existing-database

Next, remove the option --meta.data.sequence=false from your DUMP command. This table, daily_movements, has a column defined as SERIAL; therefore, you need to migrate sequences in order to migrate this table.

I was able to recreate your issue and verify that removing this DUMP command option fixes it for you. Feel free to email support@nuodb.com for any other issues you may have with the nuodb-migrator tool or NuoDB.
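In other words, the final dump command would look roughly like the one posted earlier in this thread, minus the sequence option (connection details are the user's originals; the output directory is a placeholder, following the earlier advice that --output.path should be a directory):

```shell
./nuodb-migrator dump \
    --source.driver=org.postgresql.Driver \
    --source.url=jdbc:postgresql://localhost/delta_production \
    --source.username=delta --source.password=password \
    --source.schema=public \
    --output.path=/tmp/dump
```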

Best Regards, Madolyn Sullivan | NuoDB Customer Solutions Engineer Be sure to visit our Online Community

ghost commented 10 years ago

Hello Madolyn, I'll try that and get back to you. I remind you nevertheless that the option --meta.data.sequence=false was recommended by you to resolve the initial bug. Hoping for the best this time, though. Cheers.