[Open] yymoxiaochi opened this issue 4 years ago
Hi. As you said, when fetch size > 1, the connector waits until the number of records equal to the fetch size arrives from the result set of the Oracle LogMiner view query. As I understand it, you would like the ability to capture data with a higher fetch size within some specific duration, even when the number of records required for a fetch has not been reached. Am I right? Thanks.
Yes, you are right. If there are only 10 records and the fetch size is 100, I want to be able to fetch those 10 records immediately, or have the connector return whatever data has been captured once a specified time elapses, even if the fetch size has not been reached.
Hi, is there any solution for this?
Hi. It should be possible, but of course it requires development. I am planning out the details to achieve this. Thanks.
Is there any general direction? In my test, if DBMS_LOGMNR.START_LOGMNR is started with an endScn, I can capture all the SQL within that range.
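For context, a bounded LogMiner session of the kind described above can be sketched via JDBC roughly as below. The connection handling, class name, and option flags are illustrative assumptions, not the connector's actual code:

```java
import java.sql.CallableStatement;
import java.sql.Connection;

public class LogMinerRange {

    // Anonymous PL/SQL block that starts LogMiner for a bounded SCN range.
    static String startLogMinerSql() {
        return "BEGIN DBMS_LOGMNR.START_LOGMNR("
             + "STARTSCN => ?, ENDSCN => ?, "
             + "OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG); END;";
    }

    // Starts a LogMiner session bounded by [startScn, endScn]; afterwards,
    // querying V$LOGMNR_CONTENTS only returns changes inside that range.
    static void startLogMiner(Connection conn, long startScn, long endScn)
            throws Exception {
        try (CallableStatement cs = conn.prepareCall(startLogMinerSql())) {
            cs.setLong(1, startScn);
            cs.setLong(2, endScn);
            cs.execute();
        }
    }
}
```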
What do you mean by general direction?
"I am planning some details to achieve this.", Could you tell me in what way you intend to solve this problem? Overridden some of the oralce JDBC implementation methods? :)
Not exactly. I am planning to add a timer that enforces a specified duration and propagates any data that has not yet been sent to the Kafka topic within the JDBC fetch process. Thanks.
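The timer idea above could be sketched as a small batcher that flushes either when the batch is full or when a linger window expires. This is a minimal illustration with made-up names, not the connector's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: collects records up to batchSize, but flushes earlier
// once lingerMs has elapsed since the first record of the batch arrived.
public class TimedBatcher<T> {
    private final int batchSize;
    private final long lingerMs;
    private final List<T> buffer = new ArrayList<>();
    private long firstRecordAt = -1;

    public TimedBatcher(int batchSize, long lingerMs) {
        this.batchSize = batchSize;
        this.lingerMs = lingerMs;
    }

    // Returns a full or timed-out batch, or null if neither threshold is hit.
    public List<T> add(T record, long nowMs) {
        if (buffer.isEmpty()) {
            firstRecordAt = nowMs;
        }
        buffer.add(record);
        return maybeFlush(nowMs);
    }

    // Call this periodically even when no new record arrived, so a short
    // batch is still sent once the linger window expires.
    public List<T> maybeFlush(long nowMs) {
        boolean full = buffer.size() >= batchSize;
        boolean timedOut = !buffer.isEmpty() && nowMs - firstRecordAt >= lingerMs;
        if (!full && !timedOut) {
            return null;
        }
        List<T> out = new ArrayList<>(buffer);
        buffer.clear();
        firstRecordAt = -1;
        return out;
    }
}
```

With this shape, 10 records against a batch size of 100 would still be delivered once the linger window closes, rather than waiting indefinitely for 90 more rows.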
Got it, thanks. Looking forward to your next update~~~
Any update?
Any update on this? I found that performance is poor when fetching the data one record at a time.
Hi Erdem, as far as I know, when fetchSize > 1 but only 1 SQL statement is available, LogMiner cannot capture that redo SQL because of the DB fetch size. But when fetchSize = 1 and the number of SQL statements is particularly high, the connector captures very slowly. Is there any way to solve this problem, similar to the Kafka producer's "batch.size" and "linger.ms" configurations? Thanks.
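If that producer-style pattern were adopted here, the knobs might look like the hypothetical connector properties below. These names are made up for illustration and do not exist in the connector today; they simply mirror the batch.size/linger.ms semantics:

```properties
# Hypothetical settings -- illustrative only, not supported by the connector.
# Send a batch as soon as this many captured rows have accumulated ...
poll.batch.size=100
# ... or as soon as this many milliseconds have passed since the first
# row of the batch was captured, whichever comes first.
poll.linger.ms=500
```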