Closed fitlcarlos closed 7 months ago
Hi @fitlcarlos, sorry for the late reply. According to the scanner interface documentation, Oracle types should be mapped to one of the following Go types:
// int64
// float64
// bool
// []byte
// string
// time.Time
// nil - for NULL values
These Oracle types will therefore be converted internally into the corresponding Go types:
| oracle type | go type |
| --- | --- |
| CLOB | string |
| NCLOB | string |
| BLOB | []byte |
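Because the driver hands values back as one of those base Go types, a custom scanner only needs to handle `string`, `[]byte`, and `nil`. A minimal sketch (the `LobText` type is hypothetical, not part of go-ora):

```go
package main

import "fmt"

// LobText is a minimal sql.Scanner-style type (hypothetical, for
// illustration) that accepts the Go types a CLOB/NCLOB/BLOB column
// may arrive as after the internal conversion described above.
type LobText struct {
	Valid bool
	Text  string
}

// Scan implements the database/sql Scanner interface.
func (l *LobText) Scan(src any) error {
	switch v := src.(type) {
	case nil: // NULL value
		l.Valid, l.Text = false, ""
	case string: // CLOB/NCLOB arrive as string
		l.Valid, l.Text = true, v
	case []byte: // BLOB arrives as []byte
		l.Valid, l.Text = true, string(v)
	default:
		return fmt.Errorf("LobText: unsupported source type %T", src)
	}
	return nil
}

func main() {
	var l LobText
	_ = l.Scan("clob contents")
	fmt.Println(l.Valid, l.Text)
}
```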
If your data is larger than 32 KB, you should use the URL option `lob fetch=post`.
It is not the default option because it slightly slows down the query. (Note: the default will return CLOB as VARCHAR2, NCLOB as NVARCHAR2, and BLOB as RAW.)
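For reference, the option is passed in the connection URL's query string. A sketch of building such a DSN (host, port, service, and credentials are placeholders; the `lob fetch=post` option name is taken from the discussion above):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDSN sketches a go-ora style connection string carrying the
// "lob fetch=post" URL option. All connection details here are
// placeholders.
func buildDSN(user, pass, host string, port int, service string) string {
	opts := url.Values{}
	opts.Set("lob fetch", "post") // fetch full LOB data after the row
	return fmt.Sprintf("oracle://%s:%s@%s:%d/%s?%s",
		user, url.QueryEscape(pass), host, port, service, opts.Encode())
}

func main() {
	fmt.Println(buildDSN("scott", "tiger", "localhost", 1521, "orcl"))
}
```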
I am working on increasing the number of bytes received during LOB prefetch from 32 KB to 1 GB (same as the official drivers).
Thanks
fixed in v2.8.8
I have a query with several fields of type string, date, int, and CLOB, and this query returns several rows.
I'm running DB.Query and iterating over the rows, but the CLOB field is returned to me as a string that does not contain all of the CLOB's bytes.
Note: I can't type my fields as go_ora.clob because of my logic.
The logic is as follows:
columns[0] <----- here it already gives me the CLOB column as a string, because the field's type is `any`.
Internally, can't go_ora identify that the field is of type OCIClobLocator and perform the correct conversion, returning all the CLOB's bytes?
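Given the type mapping in the reply above, columns scanned into `any` arrive as one of the driver's base Go types, so the value can be normalized with a type switch after Scan. A minimal sketch (with simulated values, since no database is available here; `normalizeLob` is a hypothetical helper):

```go
package main

import "fmt"

// normalizeLob recovers the text of a column scanned into `any`,
// covering the base Go types the driver can hand back for a LOB
// value. In a real program the values would come from rows.Scan.
func normalizeLob(v any) (text string, ok bool) {
	switch x := v.(type) {
	case nil: // SQL NULL
		return "", false
	case string: // CLOB / NCLOB
		return x, true
	case []byte: // BLOB
		return string(x), true
	default:
		return fmt.Sprint(x), true
	}
}

func main() {
	// Simulated row values: a CLOB as string, a BLOB as []byte, a NULL.
	for _, v := range []any{"full clob text", []byte("raw"), nil} {
		s, ok := normalizeLob(v)
		fmt.Println(ok, s)
	}
}
```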