darold / ora2pg

Ora2Pg is a free tool used to migrate an Oracle database to a PostgreSQL compatible schema. It connects to your Oracle database, scans it automatically, and extracts its structure or data; it then generates SQL scripts that you can load into PostgreSQL.
http://www.ora2pg.com/
GNU General Public License v3.0

BLOB Migration Issue #1699

Closed guptasai675 closed 4 months ago

guptasai675 commented 9 months ago

Hi darold,

I am attempting to migrate 4GB objects to the OID data type with ora2pg, but I am running into out-of-memory issues, and when I set LONGREADLEN to 4GB I receive the error ORA-01062: unable to allocate memory for define buffer.

Could you please advise how I can migrate the 4GB BLOBs?

I used the parameters below for the migration, with DATA_LIMIT set to 1, and the server has ample resources to handle the migration (64GB RAM and 16 CPUs).

ORACLE_DSN          dbi:Oracle:host=<host>;SID=;port=1521
ORACLE_USER         system
ORACLE_PWD          id
SCHEMA              ddddd
ALLOW               DOC_STATUS_CONTENTS
DISABLE_SEQUENCE    1
DISABLE_TRIGGERS    1
DATA_LIMIT          1
BLOB_LIMIT          5
LONGREADLEN         1572864000
LONGTRUNCOK         0
USE_LOB_LOCATOR     1
LOB_CHUNK_SIZE      512000
MODIFY_TYPE         DOC_STATUS_CONTENTS:DOC_ZIP_CONTENT:OID,DOC_STATUS_CONTENTS:DOC_PDF_CONTENT:OID,DOC_STATUS_CONTENTS:DOC_PDF_CONTENT_BIS:OID
MODIFY_STRUCT       DOC_STATUS_CONTENTS(DOC_PDF_CONTENT_BIS)
FILE_PER_CONSTRAINT 1
FILE_PER_INDEX      1
FILE_PER_FKEYS      1
FILE_PER_TABLE      1
FILE_PER_FUNCTION   1
SKIP                checks

guptasai675 commented 9 months ago

Hi darold,

Could you please share your comments?

darold commented 8 months ago

Hi, with 64GB of memory you should not experience an OOM even with a higher DATA_LIMIT. What do you mean by "using the OID datatype"? Are you talking about the --blob_to_lo option? By default, BLOBs are translated into bytea by Ora2Pg.
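For context, the two mappings differ on the PostgreSQL side (a minimal sketch using the table and column names from this issue; the DDL is illustrative, not what Ora2Pg emits verbatim):

```sql
-- Default Ora2Pg mapping: Oracle BLOB -> bytea.
-- A bytea field is stored inline/TOASTed and is limited to 1GB,
-- so a 3GB BLOB cannot fit in a bytea column.
CREATE TABLE doc_status_contents (doc_pdf_content bytea);

-- With --blob_to_lo: Oracle BLOB -> oid, a reference to a PostgreSQL
-- large object, which supports streaming access and much larger sizes.
CREATE TABLE doc_status_contents (doc_pdf_content oid);
```

This is why large objects (the oid mapping) are the usual choice when individual BLOBs exceed the 1GB bytea field limit.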

When a table has BLOB columns and the BLOB_LIMIT parameter is set, Ora2Pg will use it, so here you will export 5 rows at a time, which is low. What is the maximum size of a BLOB in this table?
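With 64GB of RAM available, the batch sizes can usually be raised well above the values in the configuration shown earlier. The numbers below are illustrative only and should be tuned to the actual maximum BLOB size:

```
# Illustrative values only — tune to your RAM and maximum BLOB size.
# DATA_LIMIT applies to tables without LOBs; BLOB_LIMIT overrides it
# for tables that contain BLOB columns.
DATA_LIMIT   10000
BLOB_LIMIT   100
```

Roughly, memory per batch scales with BLOB_LIMIT times the largest BLOB in the batch, so the bigger the BLOBs, the lower this value must stay.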

darold commented 8 months ago

And also what is the ora2pg command you are using?

guptasai675 commented 8 months ago

Hi Darold, the maximum length of the BLOB is 3GB. Yes, I am using the --blob_to_lo option, and below is the command used for the export:

nohup ora2pg -t COPY -o data_DOC_STATUS_CONTENTS.sql -b ${WORK_DIR}/${P_CODE}/data -c ${WORK_DIR}/${P_CODE}/config/ora2pg.conf -l ${LOG_1} -d --blob_to_lo &

darold commented 8 months ago

I think first that you should use direct import to PostgreSQL instead of outputting to a file. You should also parallelize the export from Oracle using the -J command line option, provided that you have a column of NUMBER data type with a unique constraint (PK or unique index) defined on this table.
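A minimal sketch of such an invocation (the parallelism levels are illustrative; direct import requires the PG_DSN, PG_USER and PG_PWD directives to be set in ora2pg.conf so that no output file is written):

```shell
# Illustrative sketch: -J 4 splits the Oracle read into 4 parallel queries
# over a uniquely-indexed NUMBER column; -j 4 runs 4 processes sending
# data to PostgreSQL. With PG_DSN set in the config, -o is not needed.
ora2pg -t COPY -c ${WORK_DIR}/${P_CODE}/config/ora2pg.conf -J 4 -j 4 --blob_to_lo
```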

guptasai675 commented 8 months ago

Yes, that's right, but we can't import directly: there are network restrictions between the two datacenters, so we're exporting to a local file. We tried the -J option, but the export itself still aborts with the same error: ORA-01062: unable to allocate memory for define buffer.

darold commented 8 months ago

I think the problem is not related to Ora2Pg but to Oracle; the OOM occurs on the Oracle side. I don't know the Oracle version or the available memory on that server, but you should investigate on the Oracle side what is happening. You should also try to export using a WHERE clause in ora2pg.conf to extract a single row with a huge BLOB, to see whether the full extraction is the problem or whether even a single-row extraction reports an OOM.
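A sketch of such a per-table filter in ora2pg.conf (the column name and id value are hypothetical — substitute the primary key of your table):

```
# Hypothetical filter: restrict the export of DOC_STATUS_CONTENTS
# to a single row containing a large BLOB.
WHERE DOC_STATUS_CONTENTS[DOC_ID = 12345]
```

If even this one-row export fails with ORA-01062, the problem is in the single-row fetch (buffer sizing on the Oracle side) rather than in the batch size.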

guptasai675 commented 8 months ago

That sounds good. I'll check on the Oracle server side and, as you suggested, try exporting a single row.