EOF is sent by the Oracle server when the client tries to send data over a closed socket. The socket closure occurs because of a network communication error, for example; other Oracle clients such as C# will throw a network exception in that case, which means an error in communication occurred.
In a normal situation, failover is activated by the database/sql package and reconnection occurs.
Unfortunately, failover is not supported for go_ora.BulkInsert, but you can get the same bulk-insert behavior from inside Exec by passing arrays as the parameters, as follows:
// import _ "github.com/sijms/go-ora/v2" to register the "oracle" driver
conn, err := sql.Open("oracle", os.Getenv("DSN"))
// check for error
// first parameter: a slice of int64
par1 := []int64{1, 2, 3}
// second parameter: a slice of strings
par2 := []string{"1", "2", "3"}
// note: both slices must have the same length
_, err = conn.Exec("INSERT INTO TB1 (COL1, COL2) VALUES (:1, :2)", par1, par2)
// check for error
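Passing equal-length slices to Exec like this makes the driver bind the whole batch as arrays, so all rows are sent to the server in one round trip rather than one INSERT per row, and because the call goes through database/sql, the usual failover/reconnection handling still applies.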
I am now rewriting the readme.md file to organize the information about using this package.
Thanks :-)
Locally this works very well (Oracle DB Docker container on a MacBook); I can use huge numbers for the array size of par1/par2, and 50000 works fine:
_, err = conn.Exec("INSERT INTO TB1 (COL1, COL2) VALUES (:1, :2)", par1, par2)
When using this in the corporate environment (Go -> OCM -> database), it still does not work. Using an array size of 87 records gives an EOF error from the insert statement after 3132 inserted records.
Using even larger arrays fails instantly with the following error: read tcp 10.31.210.132:38304->10.141.64.50:1521: read: connection reset by peer
Not sure how I can debug this.
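One way to debug this: go-ora can write a client-side protocol trace if you add the TRACE FILE option when building the connection URL; a minimal sketch (the connection details here are placeholders):

package main

import (
	"database/sql"
	"log"

	go_ora "github.com/sijms/go-ora/v2"
)

func main() {
	// "server", "service", "user", "password" are placeholders for the real
	// connection details; TRACE FILE tells the driver to log the client/server
	// packet exchange to the named file.
	url := go_ora.BuildUrl("server", 1521, "service", "user", "password",
		map[string]string{"TRACE FILE": "trace.log"})

	db, err := sql.Open("oracle", url)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// ... run the inserts, then inspect trace.log afterwards
}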
Using Oracle SQL*Loader in bash on a Solaris server, it works super fast; the CTL file starts with:
OPTIONS (SKIP = 1, DIRECT=TRUE, ROWS=30000, READSIZE=65000000, BINDSIZE=63000000)
LOAD DATA
....
Any thoughts?
Maarten
I think that SQL*Loader uses a different technique for sending bulk data to the server.
Yes, you are right. What we currently see is that we have no issues loading large arrays on a 4-instance ExaCC cluster without Data Guard. A database with Data Guard does not work with array sizes higher than 80.
Not sure in what way Data Guard impacts this.
The issue is solved; the database team has changed some settings on the DB end and all is working perfectly!
Many thanks for your help!
I will investigate SQL*Loader for fast data loading. From my first look, I find that it uses direct path. My direct path implementation is still under construction and still takes a long time (compared to SQL*Loader) to copy data.
Hi,
I am using the following batched approach to bulk load records into an Oracle ExaCC database cluster:
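A minimal sketch of such a batching loop (the table/column names, batch size, and sample data below are illustrative placeholders, not the actual production code):

package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	_ "github.com/sijms/go-ora/v2" // registers the "oracle" driver
)

func main() {
	conn, err := sql.Open("oracle", os.Getenv("DSN"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// illustrative data; the real loader reads these from an input file
	ids := make([]int64, 10000)
	names := make([]string, len(ids))
	for i := range ids {
		ids[i] = int64(i)
		names[i] = fmt.Sprintf("row-%d", i)
	}

	const batchSize = 100 // 250 works on one database, only 100 on the other

	for start := 0; start < len(ids); start += batchSize {
		end := start + batchSize
		if end > len(ids) {
			end = len(ids)
		}
		// each batch is one array-bound INSERT, i.e. one round trip
		_, err := conn.Exec(
			"INSERT INTO TB1 (COL1, COL2) VALUES (:1, :2)",
			ids[start:end], names[start:end],
		)
		if err != nil {
			log.Fatalf("batch %d..%d failed: %v", start, end, err)
		}
	}
}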
Why do this in batches? I have two different databases on the same ExaCC cluster, PRD1 and PRD2. In one of them I can use 250 as the batch size (and probably a higher number); in the other I need to use 100, and when using more, the loading process ends with EOF.
So below are the two "cleaned" trace logs from loading the same data into the two different databases.
Questions:
GOOD:
BAD: