Open joguess opened 8 hours ago
Yes, most of the time is spent in Ora2Pg transforming BLOB data into bytea. You have two choices. First, parallelize the table export as much as possible, using `-J 8 -j 2` for example if you have 16 CPUs available. Second, if you have the oracle_fdw foreign data wrapper installed, set the FDW_SERVER name in ora2pg.conf and Ora2Pg will use it to export your data. In that case you must also limit the export to the BLOB table using the `-a` command-line option or the ALLOW directive.
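As a sketch, the two approaches could look like this on the command line. The table name `MY_BLOB_TABLE` is a placeholder, and the exact flags should be checked against your ora2pg version:

```shell
# 1) Parallel export: 8 parallel Oracle connections (-J) x 2 output jobs (-j)
ora2pg -c ora2pg.conf -t COPY -J 8 -j 2 -a MY_BLOB_TABLE

# 2) oracle_fdw path: set FDW_SERVER in ora2pg.conf, then restrict the
#    export to the BLOB table with -a (or the ALLOW directive)
ora2pg -c ora2pg.conf -t COPY -a MY_BLOB_TABLE
```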
Hello, I have to export LOBs (one CLOB and one BLOB) with ora2pg.
It took me more than 2 hours to extract only 2 tables:
```
[2024-11-25 14:41:12] [========================>] 2/2 tables (100.0%) end of scanning.
[2024-11-25 14:49:24] [========================>] 6598/6576 rows (100.3%) Table 1 (489 sec., 13 recs/sec)
[2024-11-25 16:54:26] [========================>] 94669/9415 rows (1005.5%) on total estimated data (7993 sec., avg: 11 tuples/sec)
```
So the output rate is not good at all and is not acceptable: 7993 sec., avg: 11 tuples/sec.
The table with the CLOB column is 18 GB; the table with the BLOB column is 638 MB.
I have no issue exporting the data without the LOBs: it takes 8918 sec. at an average of 23908 tuples/sec, which is acceptable.
To understand why exporting the LOBs takes so long, I ran a trace on Oracle and saw that most of the time was spent on this wait event:

**"SQL*Net message from client"**

which means Oracle is waiting for the client to do something. In general, "SQL*Net message from client" is the idle wait event experienced by the server while waiting for the client to tell it what to do.
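That idle server wait is consistent with the client spending its time converting each BLOB to PostgreSQL's bytea representation before asking for the next batch. As an illustration only (this is not Ora2Pg's actual code), bytea's hex output format looks like this, and it doubles the data size on output:

```python
def to_bytea_hex(blob: bytes) -> str:
    """Render raw bytes in PostgreSQL bytea hex format:
    a literal \\x prefix followed by two hex digits per byte."""
    return "\\x" + blob.hex()

chunk = bytes(range(4))
print(to_bytea_hex(chunk))  # -> \x00010203
```

With megabyte-sized LOB values, this per-row client-side work dominates, so the Oracle server mostly sits idle between fetches.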
My settings are the following:

```
DATA_LIMIT   500
LONGREADLEN  102400000
BLOB_LIMIT   100
```
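For context, a commented ora2pg.conf fragment with those directives; the comments reflect my understanding of what each one does and are an assumption to verify against the Ora2Pg documentation, not part of the original question:

```
# ora2pg.conf fragment -- values from the report above, comments are assumptions
DATA_LIMIT     500        # rows fetched per batch for ordinary tables
BLOB_LIMIT     100        # smaller batch size used for tables containing BLOBs
LONGREADLEN    102400000  # max LOB size read per row (~100 MB buffer)
# FDW_SERVER   oracle     # only if oracle_fdw is set up on the PostgreSQL side
```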
=> Do you have any clue how to optimize the ora2pg extraction of my LOBs?