erdemcer / kafka-connect-oracle

Kafka Source Connector For Oracle
Apache License 2.0
349 stars · 167 forks

ORA-01292: no log file has been specified for the current LogMiner session #39

Open hello-llc opened 4 years ago

hello-llc commented 4 years ago

    ORA-01292: no log file has been specified for the current LogMiner session
    ORA-06512: at "SYS.DBMS_LOGMNR", line 58
    ORA-06512: at line 1

What could be causing this?
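For context, ORA-01292 is raised when `DBMS_LOGMNR.START_LOGMNR` runs without any log file registered for the session (and without an option such as `CONTINUOUS_MINE` that lets it locate logs itself). A minimal manual session that avoids the error looks roughly like this sketch; the archive log path is hypothetical:

```sql
-- Sketch of a manual LogMiner session. ORA-01292 occurs when START_LOGMNR
-- is called with no log file added and no option to find logs automatically.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/u01/arch/1_1234_1234567890.arc',  -- hypothetical path
    OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
               DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/
```

If the connector hits this error, it usually means the log file it tried to register (for the configured or stored SCN) was not found.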

erdemcer commented 4 years ago

Hi, could you please share your source config and source database version? Thanks

hello-llc commented 4 years ago

> Hi, could you please share your source config and source database version? Thanks

Hi, thanks for your reply!

    {
      "name": "oracle-logminer-connector",
      "config": {
        "connector.class": "com.ecer.kafka.connect.oracle.OracleSourceConnector",
        "db.name.alias": "test_es",
        "tasks.max": "1",
        "topic": "es_test",
        "db.name": "xxx",
        "db.hostname": "xxx",
        "db.port": "1521",
        "db.user": "xxx",
        "db.user.password": "xxx",
        "db.fetch.size": "50",
        "table.whitelist": "xxx",
        "parse.dml.data": "true",
        "reset.offset": "false",
        "start.scn": "",
        "multitenant": "false",
        "zk_server": "xxx",
        "sid": ""
      }
    }

erdemcer commented 4 years ago

And the database version?

hello-llc commented 4 years ago

> And the database version?

11.2.0.4

erdemcer commented 4 years ago

Could you please set reset.offset = true and try again?
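For reference, that suggestion amounts to changing just these two keys in the connector config posted above (a sketch of the relevant fragment; with reset.offset true and an empty start.scn, the connector discards the stored offset and starts from the database's current position):

```json
{
  "reset.offset": "true",
  "start.scn": ""
}
```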

hello-llc commented 4 years ago

> Could you please set reset.offset = true and try again?

Thanks!

Now, there's another problem:

ERROR SQL error during poll (com.ecer.kafka.connect.oracle.OracleSourceTask:249)
java.sql.SQLException: ORA-01002: fetch out of sequence

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
    at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:220)
    at oracle.jdbc.driver.T4CCallableStatement.fetch(T4CCallableStatement.java:1061)
    at oracle.jdbc.driver.OracleStatement.fetchMoreRows(OracleStatement.java:3716)
    at oracle.jdbc.driver.InsensitiveScrollableResultSet.fetchMoreRows(InsensitiveScrollableResultSet.java:1015)
    at oracle.jdbc.driver.InsensitiveScrollableResultSet.absoluteInternal(InsensitiveScrollableResultSet.java:979)
    at oracle.jdbc.driver.InsensitiveScrollableResultSet.next(InsensitiveScrollableResultSet.java:579)
    at com.ecer.kafka.connect.oracle.OracleSourceTask.poll(OracleSourceTask.java:197)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:265)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
erdemcer commented 4 years ago

Hi again, I have two more questions:

  1. Is your system RAC?
  2. Does this error occur immediately after starting the connector, or after some time? I am looking for ways to reproduce this issue. Thanks

hello-llc commented 4 years ago

> Hi again, I have two more questions:
>
> 1. Is your system RAC?
> 2. Does this error occur immediately after starting the connector, or after some time? I am looking for ways to reproduce this issue. Thanks

1. It's RAC.
2. This error occurs after some time, about two days.

Moreover: would a database connection pool be a solution?

erdemcer commented 4 years ago

You said the error occurred after 2 days. During these 2 days, did DML operations continue, or was there any idle time?

Previous issues regarding this error were on RAC systems. I am currently preparing a RAC test environment; after that, I think the issue can be resolved.

hello-llc commented 4 years ago

> You said the error occurred after 2 days. During these 2 days, did DML operations continue, or was there any idle time?
>
> Previous issues regarding this error were on RAC systems. I am currently preparing a RAC test environment; after that, I think the issue can be resolved.

Hi! DML operations continue

erdemcer commented 4 years ago

Hi, a test environment has been prepared for the RAC issue. I will update this issue after some tests. Thanks

hello-llc commented 4 years ago

> Hi, a test environment has been prepared for the RAC issue. I will update this issue after some tests. Thanks

Hi, Thanks for your help!

hello-llc commented 4 years ago

Hi,

    java.sql.SQLException: ORA-01291: missing logfile
    ORA-16241: Waiting for gap log file (thread# 1, sequence# 507715)

This error occurred before the 'ORA-01002: fetch out of sequence' error.

erdemcer commented 4 years ago

Thanks for the update. This error shows that LogMiner (the connector) tried to open log file 507715, but it is not available in the archive log destination. Could you please verify that? If it does not exist, you have to restore the missing log files to the archive log destination.
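One way to verify is to ask the database whether that archived log is still registered and present on disk. A sketch of the check, using the standard `v$archived_log` view:

```sql
-- Sketch: check whether the archived log for thread# 1, sequence# 507715
-- is still known to the database and present in the archive destination.
-- STATUS 'A' means available, 'D' deleted; DELETED = 'YES' means the file
-- was removed (e.g. by RMAN) and must be restored before mining.
SELECT name, status, deleted, first_change#, next_change#
FROM   v$archived_log
WHERE  thread#   = 1
AND    sequence# = 507715;
```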

hello-llc commented 4 years ago

> Thanks for the update. This error shows that LogMiner (the connector) tried to open log file 507715, but it is not available in the archive log destination. Could you please verify that? If it does not exist, you have to restore the missing log files to the archive log destination.

Hi, Is there any solution?

erdemcer commented 4 years ago

Did you check whether the log file really exists? If it does not, the only solution is to restore the related archive log and the following files.

hello-llc commented 4 years ago

> Did you check whether the log file really exists? If it does not, the only solution is to restore the related archive log and the following files.

Thanks. Will you update it?

erdemcer commented 4 years ago

Hi, according to your last error, there is no need to update the code. Does the error still occur?

magius82 commented 4 years ago

I have the same error when I set "reset.offset":"false", "start.scn":"xxxxxx".

With "reset.offset":"true", "start.scn":"" it works. The DB version is 12.2, a standalone database installed on an Oracle Linux 7 OS.

erdemcer commented 4 years ago

Hi, have you checked that a redo or archive log file covering the "start.scn" value exists in the log destination?
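That check can be done by looking up which archived log covers the configured SCN. A sketch, where `:start_scn` is a placeholder for the start.scn value from the connector config:

```sql
-- Sketch: find the archived log whose SCN range contains the start.scn.
-- If this returns no rows, no registered archived log covers that SCN,
-- and LogMiner has nothing to mine from that position.
SELECT name, thread#, sequence#, first_change#, next_change#
FROM   v$archived_log
WHERE  :start_scn BETWEEN first_change# AND next_change#;
```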

magius82 commented 4 years ago

My bad, the archive logs were old and had been deleted. But there is another problem I noticed. I used LogMiner manually to check the results produced by Kafka. I set the SCN to xxxxx, which exists in the archive logs, and I am capturing only one table. When I check with LogMiner, 4-5 DMLs were issued after SCN xxxxx, but the connector is not capturing all of them; it captures only 2-3, and some DMLs are absent. If I then do an update/insert/delete on the monitored table, it is captured by the connector. So, as I mentioned, some DMLs after the start.scn value are lost (not captured). (By the way, I consume the topic into a file.) What could be the reason, or am I doing something wrong?
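For that kind of comparison, the same range can be mined by hand and the committed DMLs for the single whitelisted table listed. A sketch: `:start_scn` is a placeholder, the owner/table names should match your table.whitelist entry, and `CONTINUOUS_MINE` is available on 12.2 but was desupported in 19c:

```sql
-- Sketch: mine from the given SCN and list committed DMLs for one table,
-- to compare with what the connector published to the topic.
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    STARTSCN => :start_scn,
    OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
                DBMS_LOGMNR.COMMITTED_DATA_ONLY +
                DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/

SELECT scn, commit_scn, operation, sql_redo
FROM   v$logmnr_contents
WHERE  seg_owner  = 'TEST'          -- schema/table from table.whitelist
AND    table_name = 'T_SRC_ONUR'
ORDER  BY scn;
```

Any rows returned here that never reached the topic would point at the connector's filtering or offset handling rather than at missing logs.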

erdemcer commented 4 years ago

Hi, have you noticed statements in the connector log file like "skipping records"?

magius82 commented 4 years ago

Nope, the output is like below:

    [2020-06-16 11:24:03,146] INFO Kafka version: 2.5.0 (org.apache.kafka.common.utils.AppInfoParser:117)
    [2020-06-16 11:24:03,146] INFO Kafka commitId: 66563e712b0b9f84 (org.apache.kafka.common.utils.AppInfoParser:118)
    [2020-06-16 11:24:03,146] INFO Kafka startTimeMs: 1592295843146 (org.apache.kafka.common.utils.AppInfoParser:119)
    [2020-06-16 11:24:03,153] INFO Created connector oracle-logminer-connector (org.apache.kafka.connect.cli.ConnectStandalone:112)
    [2020-06-16 11:24:03,153] INFO OracleSourceConnectorConfig values:
        db.fetch.size = 1
        db.hostname = 10.8.15.22
        db.name = XYZDEV
        db.name.alias = DEV
        db.port = 1521
        db.user = system
        db.user.password = XYZ*****
        multitenant = false
        parse.dml.data = true
        reset.offset = false
        start.scn = 171808250
        table.blacklist =
        table.whitelist = TEST.T_SRC_ONUR
        topic = cdctest (com.ecer.kafka.connect.oracle.OracleSourceConnectorConfig:347)
    [2020-06-16 11:24:03,153] INFO Oracle Kafka Connector is starting on DEV (com.ecer.kafka.connect.oracle.OracleSourceTask:112)
    [2020-06-16 11:24:03,155] INFO [Producer clientId=connector-producer-oracle-logminer-connector-0] Cluster ID: ICLkYVMoS2mP1K1IwgvQWg (org.apache.kafka.clients.Metadata:280)
    [2020-06-16 11:24:03,499] INFO Connected to database version 122010 (com.ecer.kafka.connect.oracle.OracleSourceTask:117)
    [2020-06-16 11:24:03,499] INFO Starting LogMiner Session (com.ecer.kafka.connect.oracle.OracleSourceTask:120)
    [2020-06-16 11:24:03,523] INFO Offset values , scn:172135229,commitscn:172135252,rowid:AABG2dAAWAACNQTAAE (com.ecer.kafka.connect.oracle.OracleSourceTask:141)
    [2020-06-16 11:24:03,526] INFO Captured last SCN has first position:172053870 (com.ecer.kafka.connect.oracle.OracleSourceTask:156)
    [2020-06-16 11:24:03,526] INFO Resetting offset with specified start SCN:171808250 (com.ecer.kafka.connect.oracle.OracleSourceTask:161)
    [2020-06-16 11:24:03,526] INFO Commit SCN : 172135252 (com.ecer.kafka.connect.oracle.OracleSourceTask:186)
    [2020-06-16 11:24:03,526] INFO Log Miner will start at new position SCN : 171808250 with fetch size : 1 (com.ecer.kafka.connect.oracle.OracleSourceTask:187)
    [2020-06-16 11:24:13,152] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-06-16 11:24:13,153] INFO WorkerSourceTask{id=oracle-logminer-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-06-16 11:24:19,893] INFO Logminer started successfully (com.ecer.kafka.connect.oracle.OracleSourceTask:195)
    [2020-06-16 11:24:19,894] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:214)
    [2020-06-16 11:24:19,896] INFO Getting dictionary details for table : T_SRC_ONUR (com.ecer.kafka.connect.oracle.OracleSourceConnectorUtils:153)
    [2020-06-16 11:24:23,153] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-06-16 11:24:23,153] INFO WorkerSourceTask{id=oracle-logminer-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-06-16 11:24:23,161] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Finished commitOffsets successfully in 8 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:523)
    [2020-06-16 11:24:33,161] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-06-16 11:24:33,161] INFO WorkerSourceTask{id=oracle-logminer-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-06-16 11:24:43,162] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-06-16 11:24:43,162] INFO WorkerSourceTask{id=oracle-logminer-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-06-16 11:24:47,271] INFO [GroupMetadataManager brokerId=0] Group console-consumer-96552 transitioned to Dead in generation 2 (kafka.coordinator.group.GroupMetadataManager)
    [2020-06-16 11:24:47,273] INFO [GroupMetadataManager brokerId=0] Group console-consumer-38737 transitioned to Dead in generation 2 (kafka.coordinator.group.GroupMetadataManager)
    [2020-06-16 11:24:47,273] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
    [2020-06-16 11:24:53,163] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-06-16 11:24:53,163] INFO WorkerSourceTask{id=oracle-logminer-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-06-16 11:25:03,164] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-06-16 11:25:03,164] INFO WorkerSourceTask{id=oracle-logminer-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-06-16 11:25:13,165] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-06-16 11:25:13,165] INFO WorkerSourceTask{id=oracle-logminer-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-06-16 11:25:23,166] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-06-16 11:25:23,166] INFO WorkerSourceTask{id=oracle-logminer-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-06-16 11:25:33,166] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
    [2020-06-16 11:25:33,167] INFO WorkerSourceTask{id=oracle-logminer-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
    [2020-06-16 11:25:43,167] INFO WorkerSourceTask{id=oracle-logminer-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)

santoshchanda01 commented 4 years ago

Do we need to set the autoArchival parameter to true to fetch records from the archive logs? Currently, the parameter is set to false, and I am unable to use an old SCN ID.

One more question, not related to this topic: does this application support Oracle with TDE enabled?

santoshchanda01 commented 4 years ago

> Do we need to set the autoArchival parameter to true to fetch records from the archive logs? Currently, the parameter is set to false, and I am unable to use an old SCN ID.
>
> One more question, not related to this topic: does this application support Oracle with TDE enabled?

(screenshot attached)

Hi @erdemcer , I am sharing a screenshot image for your reference. Could you please help in this regard?

erdemcer commented 4 years ago

Hi, what are your config parameters? Is the connector starting from the specified SCN?

santoshchanda01 commented 4 years ago

> Hi, what are your config parameters? Is the connector starting from the specified SCN?

Hi, Thanks for your reply.

Yes, I am starting with a specific SCN ID. Without the SCN ID it works fine, and there is no redo log issue as such. Please note that the autoArchival parameter is set to false. Will this have any impact on LogMiner?

Your inputs on this would really help. Thank you.

Also, one generic question on TDE: will this connector work if TDE is enabled in Oracle?

erdemcer commented 4 years ago

Hi, if by autoArchival you mean the log_archive_start parameter, it has been deprecated since the 10g version. If your database is in archivelog mode, automatic archiving is already started.

Declaring a specific SCN makes the connector look for the related archive logs to mine. If those archive logs do not exist in the destination, the connector can give you this error. Please verify that.

Actually, I have not tested TDE, but as far as I know the LogMiner facility used by this connector supports TDE. Thanks
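On the archiving question, the mode can be confirmed directly from the database (a sketch; the second command is SQL*Plus-specific):

```sql
-- Sketch: confirm the database is in ARCHIVELOG mode. With it enabled,
-- archiving runs automatically; no log_archive_start setting is needed
-- (that parameter has been obsolete since 10g).
SELECT log_mode FROM v$database;

-- In SQL*Plus, this also shows the archive destination and log sequences:
ARCHIVE LOG LIST
```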

santoshchanda01 commented 4 years ago

Hi, thanks for your inputs. I am testing it with TDE enabled and tables containing XML data. I am facing some issues extracting the XML data; I'll share the solution once I find it.

Thank you.