tudoutiao / h2database

Automatically exported from code.google.com/p/h2database

Out Of Memory when large transaction causes Undo log switching #161

Closed. GoogleCodeExporter closed this issue 8 years ago.

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
(simple SQL scripts or simple standalone applications are preferred)
1. insert 50,000,000 rows into a table
2. execute 'DELETE FROM table WHERE (always-true condition)'
3. wait and see (a reproduction sketch follows below)
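
A minimal standalone reproduction along these lines (table, column, and database path are illustrative, not taken from the original report):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class DeleteOom {
        public static void main(String[] args) throws Exception {
            // Embedded H2 database; run the JVM with -Xmx256m to match the report.
            Connection conn = DriverManager.getConnection("jdbc:h2:~/test");
            Statement stat = conn.createStatement();
            stat.execute("CREATE TABLE TEST(ID BIGINT, NAME VARCHAR)");
            PreparedStatement prep = conn.prepareStatement(
                    "INSERT INTO TEST VALUES(?, 'row')");
            for (long i = 0; i < 50000000L; i++) {
                prep.setLong(1, i);
                prep.execute();
            }
            // The DELETE below runs as one transaction; its undo log grows
            // with the number of deleted rows and eventually exhausts the heap.
            stat.execute("DELETE FROM TEST WHERE 1=1");
            conn.close();
        }
    }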

What is the expected output? What do you see instead?
* expected: empty table
* result: OOM exception

What version of the product are you using? On what operating system, file
system, and virtual machine?
* OS: Mac OS X 10.6
* H2: 1.2.127 (-Xmx256m, embedded)

Do you know a workaround?
* No
How important/urgent is the problem for you?
* urgent 
In your view, is this a defect or a feature request?
* defect
Please provide any additional information below.
* My English is not good, so I have attached a picture to explain what's happening.

Original issue reported on code.google.com by nobo...@gmail.com on 1 Feb 2010 at 7:47

GoogleCodeExporter commented 8 years ago
This is a known problem. Transactions could always run out of memory, but it has
gotten worse than it used to be. The only current workaround is to use smaller
transactions or, in your case, TRUNCATE instead of DELETE.

I will try to fix it, but I'm not sure when exactly.
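
For reference, a minimal sketch of the TRUNCATE workaround (TEST is a hypothetical table name, and an open java.sql.Connection conn is assumed):

    // TRUNCATE does not keep per-row undo information in H2 (it cannot be
    // rolled back), so it avoids the huge undo log a full-table DELETE builds.
    Statement stat = conn.createStatement();
    stat.execute("TRUNCATE TABLE TEST");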

Original comment by thomas.t...@gmail.com on 6 Feb 2010 at 11:51

GoogleCodeExporter commented 8 years ago
This problem should be fixed in version 1.2.130, or at least it should no longer
run out of memory as quickly. Could you check whether it's still a problem?

Original comment by thomas.t...@gmail.com on 26 Feb 2010 at 4:33

GoogleCodeExporter commented 8 years ago
In my case, I have to use DELETE instead of TRUNCATE, because I'm not truncating
the whole table; the rows being removed are only half of the data.
I know this is not best practice, but it's what I have to do.

I just re-ran the test case, and the same thing happened.

Original comment by nobo...@gmail.com on 2 Mar 2010 at 6:46

GoogleCodeExporter commented 8 years ago

Original comment by thomas.t...@gmail.com on 2 Mar 2010 at 8:00

GoogleCodeExporter commented 8 years ago
As I wrote, it should no longer use as much memory as it used to. However, it
will eventually still run out of memory (at least with 50 million rows). Also,
the problem still exists when using MVCC. Do you use MVCC?
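
For context, MVCC is an opt-in mode in H2 1.x, enabled per database through the connection URL; a minimal example (database path hypothetical):

    // Open the database with multi-version concurrency control enabled.
    Connection conn = DriverManager.getConnection("jdbc:h2:~/test;MVCC=TRUE");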

Original comment by thomas.t...@gmail.com on 5 Mar 2010 at 8:01

GoogleCodeExporter commented 8 years ago
*   private static String DEFAULT_URL = "jdbc:h2:/Users/nobocop/local/var/lib/h2";

No, I'm not using MVCC. 

Original comment by nobo...@gmail.com on 5 Mar 2010 at 8:15

GoogleCodeExporter commented 8 years ago
I will try to solve this problem, but I'm not sure yet when I will have time to
do that. As a workaround, I suggest deleting from the table in multiple batches,
if that is possible.
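
A sketch of that batched-delete workaround, assuming a numeric ID column (table and column names hypothetical, java.sql imports assumed); committing after each batch keeps the undo log of any single transaction small:

    static void deleteInBatches(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        PreparedStatement prep = conn.prepareStatement(
                "DELETE FROM TEST WHERE ID >= ? AND ID < ?");
        long batchSize = 100000;
        for (long start = 0; start < 50000000L; start += batchSize) {
            prep.setLong(1, start);
            prep.setLong(2, start + batchSize);
            prep.executeUpdate();
            // Commit each batch so undo information can be released.
            conn.commit();
        }
    }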

Original comment by thomas.t...@gmail.com on 5 Mar 2010 at 8:38

GoogleCodeExporter commented 8 years ago

Original comment by thomas.t...@gmail.com on 21 Mar 2010 at 11:43

GoogleCodeExporter commented 8 years ago
This problem should be solved in version 1.2.137 (2010-06-06) - see the change
log at http://www.h2database.com/html/changelog.html: "Experimental feature to
support very large transactions (except when using MVCC). To enable, set the
system property h2.largeTransactions to true. If enabled, changes to tables
without a primary key can be buffered to disk. The plan is to enable this
feature by default in version 1.3.x."
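
A sketch of enabling that experimental feature from application code; presumably the property has to be set before the database is opened (or passed as -Dh2.largeTransactions=true on the JVM command line):

    // Enable experimental large-transaction support (H2 1.2.137+,
    // not effective when MVCC is used).
    System.setProperty("h2.largeTransactions", "true");
    Connection conn = DriverManager.getConnection("jdbc:h2:~/test");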

Could you please test whether this works for you, and add a comment to this bug
with your results? If required, I will then reopen the bug.

Original comment by thomas.t...@gmail.com on 7 Jun 2010 at 4:22