bigcy / tungsten-replicator

Automatically exported from code.google.com/p/tungsten-replicator

Prefetch gets OutOfMemoryError when converted statement query fetches a large number of rows #319

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?

1. Set up master/slave replication with prefetch enabled on the slave. Set 
allowAll=true so that all statements are prefetched, regardless of actual 
slave lag. 
2. Use sysbench to add a 10M-row table to a test database (example command 
below).
3. Issue an UPDATE that touches all rows, such as UPDATE sbtest SET k2=k2+1 
WHERE k2 > 0;
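
For step 2, a sysbench invocation along these lines creates the sbtest table 
(old 0.4-style syntax; database name and credentials are placeholders):

    sysbench --test=oltp --oltp-table-size=10000000 \
      --mysql-db=test --mysql-user=USER --mysql-password=PASS prepare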

What is the expected output?

Prefetch should fetch rows from sbtest and then fetch secondary indexes. 

What do you see instead?

The replicator stage that implements prefetch typically fails with an 
OutOfMemoryError.

What is the possible cause?

The Drizzle JDBC driver buffers the entire result set in memory, which 
exhausts the replicator's heap when a converted query returns millions of 
rows.
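
For comparison, MySQL Connector/J can stream rows one at a time instead of 
buffering them; as far as we know the Drizzle driver offers no equivalent 
mode, hence the LIMIT-based workaround proposed below. A minimal sketch of 
Connector/J streaming:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    class StreamingStatementExample
    {
        // With MySQL Connector/J, a forward-only, read-only statement with
        // fetch size Integer.MIN_VALUE streams rows from the server one at
        // a time instead of materializing the whole result set in memory.
        static Statement streamingStatement(Connection conn) throws SQLException
        {
            Statement stmt = conn.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            stmt.setFetchSize(Integer.MIN_VALUE);
            return stmt;
        }
    }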

What is the proposed solution?

Add a LIMIT clause to each converted SELECT statement. The example above would 
convert to SELECT * FROM sbtest WHERE k2 > 0 LIMIT nnnnn, where the value of 
nnnnn comes from the property prefetchRowLimit, set as follows in the 
static-svc.properties file. 

# Maximum number of rows to return when transforming a statement to a query. 
# Higher values can cause the replicator to run out of memory.  Lower values
# reduce prefetching on operations that affect large numbers of rows.  This 
# value is a compromise.  A value of 0 removes the limit. 
replicator.applier.dbms.prefetchRowLimit=25000
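
A minimal sketch of how the converter could apply the limit; the class and 
method names here are illustrative, not the actual replicator code:

    // Illustrative only: PrefetchQueryConverter is a hypothetical name.
    public class PrefetchQueryConverter
    {
        // Populated from replicator.applier.dbms.prefetchRowLimit.
        private final int prefetchRowLimit;

        public PrefetchQueryConverter(int prefetchRowLimit)
        {
            this.prefetchRowLimit = prefetchRowLimit;
        }

        // Appends a LIMIT clause to the converted SELECT; a limit of 0
        // (or less) leaves the query unchanged, i.e. no limit.
        public String applyRowLimit(String convertedSelect)
        {
            if (prefetchRowLimit <= 0)
                return convertedSelect;
            return convertedSelect + " LIMIT " + prefetchRowLimit;
        }
    }

With the default of 25000, the UPDATE above would be issued as SELECT * FROM 
sbtest WHERE k2 > 0 LIMIT 25000.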

Additional information

...

Original issue reported on code.google.com by robert.h...@continuent.com on 22 Mar 2012 at 6:17

GoogleCodeExporter commented 9 years ago

Original comment by robert.h...@continuent.com on 19 Sep 2012 at 2:19

GoogleCodeExporter commented 9 years ago

Original comment by robert.h...@continuent.com on 19 Sep 2012 at 2:27

GoogleCodeExporter commented 9 years ago

Original comment by linas.vi...@continuent.com on 15 Jan 2013 at 4:41

GoogleCodeExporter commented 9 years ago
This is pushed out until it becomes a problem in actual product usage. For now 
we recommend increasing replicator memory. 
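
If the standard Java Service Wrapper setup applies to your install, the heap 
would be raised in the replicator's wrapper.conf, along these lines (property 
name and value are assumptions; verify against your installation):

    wrapper.java.maxmemory=2048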

Original comment by robert.h...@continuent.com on 18 Mar 2013 at 6:20

GoogleCodeExporter commented 9 years ago
We'll use 2.1.0 instead of 2.0.8, hence moving the issues.

Original comment by linas.vi...@continuent.com on 27 Mar 2013 at 3:11

GoogleCodeExporter commented 9 years ago

Original comment by linas.vi...@continuent.com on 26 Aug 2013 at 1:54

GoogleCodeExporter commented 9 years ago
There won't be a 2.1.3.

Original comment by linas.vi...@continuent.com on 17 Sep 2013 at 10:13

GoogleCodeExporter commented 9 years ago

Original comment by robert.h...@continuent.com on 11 Dec 2013 at 4:11