Original comment by robert.h...@continuent.com
on 5 Nov 2012 at 1:42
Original comment by linas.vi...@continuent.com
on 15 Jan 2013 at 4:41
Original comment by jeff.m...@continuent.com
on 21 Feb 2013 at 7:54
Original comment by jeff.m...@continuent.com
on 21 Feb 2013 at 7:56
Original comment by robert.h...@continuent.com
on 18 Mar 2013 at 6:21
We'll use 2.1.0 instead of 2.0.8, hence moving the issues.
Original comment by linas.vi...@continuent.com
on 27 Mar 2013 at 3:13
This is not an easy fix: we can't fragment an *individual* statement or row
update. We might need an extension of the record format: if an object is bigger
than X, we would store it externally. This would work similarly to LOAD DATA
INFILE.
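The externalization idea above can be sketched in a few lines. This is a hypothetical illustration, not Tungsten code: the class name, the 4 MB THRESHOLD, and the temp-file naming are all my own assumptions. Records over the threshold go to a side file (much like LOAD DATA INFILE staging) and only a reference would be kept in the log record; smaller records stay inline.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExternalStore {
    // Illustrative cutoff; the real "X" would be a tuning parameter.
    static final int THRESHOLD = 4 * 1024 * 1024;

    // Returns null if the record can stay inline, otherwise the path of
    // the external file that would be referenced from the log record.
    static String store(byte[] record, Path dir) throws IOException {
        if (record.length <= THRESHOLD) {
            return null; // small enough: keep the bytes in the record itself
        }
        Path f = Files.createTempFile(dir, "thl-blob-", ".bin");
        Files.write(f, record);
        return f.toString(); // the log stores this reference, not the bytes
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("thl-demo");
        System.out.println(store(new byte[1024], dir));                  // inline
        System.out.println(store(new byte[THRESHOLD + 1], dir) != null); // externalized
    }
}
```

The apply side would read the referenced file back in a streaming fashion, so the full object never has to fit in the replicator heap at once.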
Original comment by linas.vi...@continuent.com
on 4 Jul 2013 at 2:38
The problem still persists with build 2.1.1-56.
The slave replicator dies while trying to apply the record.
Partial workaround: doubling the replicator heap memory.
Original comment by g.maxia
on 4 Jul 2013 at 4:25
Do you have an estimate of how much more heap it needs for an X MB record? Is
it linear?
Original comment by linas.vi...@continuent.com
on 4 Jul 2013 at 7:15
If I use a bigger record, the replicator fails in different ways. We need to
nail down this one (i.e., understand why it is failing) before we attempt to
scale the test to bigger values.
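On the "is it linear?" question, a rough sketch (my own reasoning, not a measurement of the replicator): Java serialization of an N-byte payload produces roughly N bytes plus a small constant header, and during deserialization both the serialized byte[] and the reconstructed object are live at once, so an N MB record plausibly needs at least ~2N MB of transient heap. The class below only demonstrates the linear serialized-size part:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SerializedOverhead {
    // Serialized size of a byte[] payload: roughly payload length plus a
    // small constant stream header, i.e. linear in the record size.
    static int serializedSize(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(payload);
        }
        return bos.size();
    }

    public static void main(String[] args) throws IOException {
        for (int mb : new int[]{1, 2, 4}) {
            int n = mb * 1024 * 1024;
            System.out.println(mb + " MB -> serialized " + serializedSize(new byte[n]) + " bytes");
        }
    }
}
```

If the replicator holds extra intermediate copies (buffers, fragments, queues), the constant factor grows, which would match "doubling the heap" only working as a partial workaround.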
Original comment by g.maxia
on 4 Jul 2013 at 9:12
Original comment by linas.vi...@continuent.com
on 26 Aug 2013 at 1:54
There won't be a 2.1.3.
Original comment by linas.vi...@continuent.com
on 17 Sep 2013 at 10:13
This is a big change and needs to be scheduled for a release in which we can
really fix it. One fix is to change the replicator log format to segment large
transactions into separate messages, similar to the way LOAD DATA statements
are handled when pulling data from MySQL.
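The segmentation approach described above can be sketched as follows. This is a minimal illustration under my own assumptions (the class name, the 1 MB MAX_FRAGMENT limit, and the byte[]-chunking are all hypothetical, not the actual THL wire format): one oversized payload is split into fragments that each fit a bounded message size, so no single message has to be deserialized whole into a huge buffer.

```java
import java.util.ArrayList;
import java.util.List;

public class Fragmenter {
    // Illustrative per-message cap; the real limit would be configurable.
    static final int MAX_FRAGMENT = 1 << 20; // 1 MB

    // Splits a payload into fragments of at most MAX_FRAGMENT bytes each.
    static List<byte[]> fragment(byte[] payload) {
        List<byte[]> out = new ArrayList<>();
        for (int off = 0; off < payload.length; off += MAX_FRAGMENT) {
            int len = Math.min(MAX_FRAGMENT, payload.length - off);
            byte[] frag = new byte[len];
            System.arraycopy(payload, off, frag, 0, len);
            out.add(frag);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] big = new byte[5 * MAX_FRAGMENT + 123];
        System.out.println(fragment(big).size()); // 6: five full fragments + remainder
    }
}
```

The receiving side would reassemble fragments sharing one seqno before commit, the same way fragmented LOAD DATA events are already stitched back together.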
Original comment by robert.h...@continuent.com
on 11 Dec 2013 at 4:35
How to reproduce using a Tungsten-Sandbox:
MASTER=~/tsb3/db_n1
$MASTER -e "set global binlog_format=row"
~/tsb3/db_use_all "set global max_allowed_packet=200*1024*1024"
~/tsb3/db_use_all "select format(@@max_allowed_packet,0) as max_allowed_packet"
$MASTER -e "create schema if not exists test"
$MASTER -e "drop table if exists test.test4g"
$MASTER -e "drop table if exists test.innodb_lock_monitor"
$MASTER -e "CREATE TABLE test.innodb_lock_monitor(a int) ENGINE=INNODB;"
$MASTER -e "create table test.test4g(id int not null auto_increment primary key, t longtext, TS timestamp)"
$MASTER -e "insert into test.test4g(t) values (repeat( 'a', 64*1024*1024) )"
Resulting status:
NAME VALUE
---- -----
appliedLastEventId : NONE
appliedLastSeqno : -1
appliedLatency : -1.0
autoRecoveryEnabled : false
autoRecoveryTotal : 0
channels : -1
clusterName : tsandbox
currentEventId : NONE
currentTimeMillis : 1407250499626
dataServerHost : gmini
extensions :
host : gmini
latestEpochNumber : -1
masterConnectUri : thl://gmini:12110/
masterListenUri : thl://gmini:12120/
maximumStoredSeqNo : -1
minimumStoredSeqNo : -1
offlineRequests : NONE
pendingError : Event extraction failed
pendingErrorCode : NONE
pendingErrorEventId : NONE
pendingErrorSeqno : -1
pendingExceptionMessage: Connector handler terminated by THL exception: Unable
to deserialize event
pipelineSource : UNKNOWN
relativeLatency : -1.0
resourcePrecedence : 99
rmiPort : 10120
role : slave
seqnoType : java.lang.Long
serviceName : tsandbox
serviceType : unknown
simpleServiceName : tsandbox
siteName : default
sourceId : gmini
state : OFFLINE:ERROR
timeInStateSeconds : 189.316
transitioningTo :
uptimeSeconds : 861.175
useSSLConnection : false
version : Tungsten Replicator 3.0.0 build 215
Finished status command...
Original comment by g.maxia
on 5 Aug 2014 at 2:57
Stack trace of the previous example:
INFO | jvm 1 | 2014/08/05 21:51:50 | 2014-08-05 21:51:50,278 [tsandbox - remote-to-thl-0] INFO pipeline.SingleThreadStageTask Last successfully processed event prior to termination: seqno=5 eventid=mysql-bin.000002:0000000000001047;53
INFO | jvm 1 | 2014/08/05 21:51:50 | 2014-08-05 21:51:50,278 [tsandbox - remote-to-thl-0] INFO pipeline.SingleThreadStageTask Task event count: 6
INFO | jvm 1 | 2014/08/05 21:51:50 | 2014-08-05 21:51:50,278 [tsandbox - pool-2-thread-1] ERROR management.OpenReplicatorManager Received error notification, shutting down services :
INFO | jvm 1 | 2014/08/05 21:51:50 | Event extraction failed
INFO | jvm 1 | 2014/08/05 21:51:50 | com.continuent.tungsten.replicator.extractor.ExtractorException: Connector handler terminated by THL exception: Unable to deserialize event
INFO | jvm 1 | 2014/08/05 21:51:50 | at com.continuent.tungsten.replicator.thl.RemoteTHLExtractor.extract(RemoteTHLExtractor.java:304)
INFO | jvm 1 | 2014/08/05 21:51:50 | at com.continuent.tungsten.replicator.thl.RemoteTHLExtractor.extract(RemoteTHLExtractor.java:60)
INFO | jvm 1 | 2014/08/05 21:51:50 | at com.continuent.tungsten.replicator.pipeline.SingleThreadStageTask.runTask(SingleThreadStageTask.java:252)
INFO | jvm 1 | 2014/08/05 21:51:50 | at com.continuent.tungsten.replicator.pipeline.SingleThreadStageTask.run(SingleThreadStageTask.java:179)
INFO | jvm 1 | 2014/08/05 21:51:50 | at java.lang.Thread.run(Thread.java:695)
INFO | jvm 1 | 2014/08/05 21:51:50 | Caused by: com.continuent.tungsten.replicator.thl.THLException: Connector handler terminated by THL exception: Unable to deserialize event
INFO | jvm 1 | 2014/08/05 21:51:50 | at com.continuent.tungsten.replicator.thl.Protocol.requestReplEvent(Protocol.java:375)
INFO | jvm 1 | 2014/08/05 21:51:50 | at com.continuent.tungsten.replicator.thl.Connector.requestEvent(Connector.java:175)
INFO | jvm 1 | 2014/08/05 21:51:50 | at com.continuent.tungsten.replicator.thl.RemoteTHLExtractor.extract(RemoteTHLExtractor.java:237)
INFO | jvm 1 | 2014/08/05 21:51:50 | ... 4 more
INFO | jvm 1 | 2014/08/05 21:51:50 | 2014-08-05 21:51:50,281 [tsandbox - pool-2-thread-1] WARN management.OpenReplicatorManager Performing emergency service shutdown
Original comment by g.maxia
on 5 Aug 2014 at 3:00
Original issue reported on code.google.com by
g.maxia
on 20 Oct 2012 at 2:10