No testcase. Insufficient info to define what you are doing. Some call to set the transaction isolation fails, yet such things work with the default (DBCP2) pool, so the only assumption, in the absence of more info, is that the Tomcat pool code is buggy, but then who knows. The onus is on people to demonstrate things, and if they don't want to then they need to do the debugging and contribute possible fixes
Your theory about the tomcat validation causing the transaction not to be in IDLE is interesting: If you temporarily turn off the tomcat validation (i.e. remove the option from the JNDI config in context.xml) does the problem still occur?
@chrisco484 I don't use context.xml to configure my JNDI. But yes, maybe I'll try this at some point. But that doesn't really matter, does it? It's a generic question about JPA, connection pools and expectations. Disabling the validation would not be an acceptable work-around.
@andyjefferson That's why I asked - what is the expected behavior? Since your code sits directly at that interface, I assumed that you know what standards it falls under. Is the expected behavior that the connection pool returns the connection with its transaction idle? If yes, is that specified somewhere? That would give me ammunition to go after Tomcat; otherwise, honestly, I'm gonna get the same response - "not our problem, go away, it works with other software". If the standard is silent on the matter, do you think that Datanucleus should just blow chunks if it encounters a connection pool that behaves this way?
And refusing to do anything about it because you need a test case - well, what kind of a test case is this going to be? A database pool that purposefully changes transaction isolation on a returned connection? Without a specification, that can be dismissed. Or setting up Apache Tomcat, the JPA, a specific pool configuration, and a web app? What if they are doing it wrong? Now, if I had the answer about the specification, then the test case would be two lines of code that either validates that the Apache connection pool returns connections with the proper transaction isolation, or that Datanucleus code doesn't meddle with isolation, or resets the transaction because the pool does not guarantee the isolation value. By the way, the statement that Datanucleus is "working with DBCP2" can, by the same reasoning, mean that there is a bug in DBCP2.
My reason for digging into Datanucleus - I was fed up with another JPA provider creating nonsensical queries from complex JPQL. So I just replaced it with Datanucleus. That caused me to run into 103 in datanucleus-api-jpa, and this issue, before any of the schema evaluation or queries could even be tested.
Look, I understand that you can dismiss my ramblings here. But it is a fact that it ended up being easier for me (because at that point I wasn't sure how much more incompatibility I was going to run into) to go fix up sub-query support in that other JPA implementation, rather than continue to hack at my web application and Apache configuration.
I don't see how there is any "expected" behaviour defined for a JPA provider. What specification do you refer to? It's not in the JPA spec, which has only 4 mentions of a datastore connection, and nothing related to pooling of connections, and certainly nothing about even being able to set the transaction isolation.
The JDBC spec has a java.sql.Connection object, and its javadocs for setTransactionIsolation() say: "Note: If this method is called during a transaction, the result is implementation-defined." So the only interpretation I have is that nobody is contravening anything, and the Tomcat pooling people are simply choosing to not implement that method, or to only allow it under some circumstances.
Only you know when that connection was obtained, and at what point this exception came up. Was it just after getting the connection from the (Tomcat) pool, and so presumably ready to be configured? I see no obvious reason why a connection cannot have its isolation set when being used for the first time, but again, the implementor of the pool is able to allow or disallow that method if they so wish.
Testcases are defined by http://datanucleus.org/documentation/problem_reporting.html and while that may not be adequate to fully reproduce something, it would be adequate to show more or less where things are happening, particularly with reference to logs (which show when a connection comes from a pool).
Nobody has "dismissed" your "ramblings", simply that you provide inadequate info to comment accurately, and since very few people have any time to spend here (the usual open source thing of only maybe 1% of people using the software actually contribute anything) then they are definitely not going to spend it on an inadequate definition (and this project has a policy of not keeping hundreds of ill-defined "problem reports" that people have to wade through just to find a clear definition of what needs working). Nobody here gains from your usage of this software unless you put something back into it, so you are perfectly at liberty to go off and use something else. Be happy
The Tomcat connection pool was a contrib years ago (see the header of that file).
If you look at the javadocs for Tomcat's "PoolProperties" there is a method setDefaultTransactionIsolation(). Perhaps, if you are just setting the isolation to a single value for an EMF, you could get the code for that class and add a call to setDefaultTransactionIsolation(). See the HikariCP variant that I wrote a long time ago, which does something similar. If that works for you then you can contribute it as a PullRequest.
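For illustration only, here is a minimal sketch of where that call would fit, written against the plain tomcat-jdbc API; this is not the actual DataNucleus TomcatConnectionPoolFactory code, and the class name, pool settings and the serializable isolation value are assumptions:

// Sketch only: where setDefaultTransactionIsolation() fits on a tomcat-jdbc pool.
// Everything below (names, settings, the chosen isolation) is illustrative.
import java.sql.Connection;
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class TomcatPoolSketch
{
    public static DataSource createPool(String url, String driver, String user, String pwd)
    {
        PoolProperties p = new PoolProperties();
        p.setUrl(url);
        p.setDriverClassName(driver);
        p.setUsername(user);
        p.setPassword(pwd);
        p.setValidationQuery("SELECT 1");
        p.setTestOnBorrow(true);
        // Hand every borrowed connection out already at the isolation the EMF needs,
        // so nothing has to change it once a transaction may already be open.
        p.setDefaultTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        return new DataSource(p);
    }
}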
@veselov I wasn't suggesting disabling the validation as a work around - was just suggesting it may help with your diagnosis of the issue.
My company and companies we contract to have been using DataNucleus successfully on very large, very OO data models (which we found virtually impossible to do efficiently or productively with another ORM) for many years. Although we use JDO not JPA the DN core and DN rdbms are excellently abstracted away from any particular ORM standard so both standards leverage a common, rock solid, high performance underlying foundation.
Although we have used DataNucleus for many years, it's only recently (better late than never 👍 ) that I've taken to "giving back" with some code contributions, and I would encourage you to join the DataNucleus community and do the same.
The DataNucleus community is full of opinionated "free thinkers" - but that's obvious if you think about it - people who aren't "free thinkers" are usually just sheeple who seem happy enough to follow the other Lemmings over the cliff while using "another" JPA implementation =)
@andyjefferson I'm not setting the transaction isolation; Datanucleus does, without me intervening in the slightest. Though I understand it's probably needed for the functionality to work. The isolation value may be controllable, but I haven't meddled with any relevant configuration.
if (dba.supportsTransactionIsolation(reqdIsolationLevel))
{
    int currentIsolationLevel = cnx.getTransactionIsolation();
    if (currentIsolationLevel != reqdIsolationLevel)
    {
        cnx.setTransactionIsolation(reqdIsolationLevel);
    }
}
The Tomcat pooling connection object simply forwards the setTransactionIsolation() call to the underlying PG connection. The PG connection is blowing chunks because it can't change isolation in the middle of a transaction, which it is allowed to do. That part is absolutely fine.
The API by which Datanucleus acquires the connection is governed by [DataSource.getConnection()](https://docs.oracle.com/javase/7/docs/api/javax/sql/DataSource.html#getConnection()). The documentation is completely silent on whether the returned connection object is in any particular state, which means (IMHO) that it is fully OK if the returned connection is in the middle of a transaction.
I don't think there is any benefit (but this is the part where I may be completely off) in Datanucleus holding on to any existing connection state, especially since there is no promise about that state. So, if the connection is just reset after acquisition, it will suit everybody. After all, the code already defends itself by enforcing auto-commit, and transaction isolation for that matter.
diff --git a/src/main/java/org/datanucleus/store/rdbms/ConnectionFactoryImpl.java b/src/main/java/org/datanucleus/store/rdbms/ConnectionFactoryImpl.java
index 06b05e93..1a7f16d2 100644
--- a/src/main/java/org/datanucleus/store/rdbms/ConnectionFactoryImpl.java
+++ b/src/main/java/org/datanucleus/store/rdbms/ConnectionFactoryImpl.java
@@ -439,6 +439,10 @@ public class ConnectionFactoryImpl extends AbstractConnectionFactory
{
cnx.setAutoCommit(false);
}
+ else
+ {
+ cnx.rollback();
+ }
if (dba.supportsTransactionIsolation(reqdIsolationLevel))
{
I'll provide a test case through a test-jpa project, and post it here in some way.
The test is at https://github.com/veselov/test-jpa/tree/datanucleus-rdbms-274. It succeeds if the "select 1" query is removed in SimpleCF. Requires a Postgres database (uses datanucleus db, username and password by default).
I wouldn't use the Tomcat pool wrapper; maintaining it requires constantly chasing Tomcat features, and different Tomcat versions use different pools by default. Newer Tomcat versions don't use that pool, they use DBCP2, so somebody who keeps using this wrapper but moves to newer Tomcat versions can end up relying on deprecated functionality. It's always healthier to create the data source with Tomcat itself, register it in JNDI, and have JPA use it directly, along the lines of the sketch below.
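To make that concrete, a rough sketch of the approach (the JNDI name, the persistence unit name and the use of the datanucleus.ConnectionFactory property are my assumptions here, not something from this thread):

// Rough sketch: the pool is defined and owned by Tomcat; the app only looks it up
// from JNDI and hands it to the EMF. All names below are hypothetical.
import java.util.HashMap;
import java.util.Map;
import javax.naming.InitialContext;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.sql.DataSource;

public class JndiWiredEmf
{
    public static EntityManagerFactory create() throws Exception
    {
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyAppDS");
        Map<String, Object> props = new HashMap<>();
        props.put("datanucleus.ConnectionFactory", ds); // pass the pooled DataSource itself
        return Persistence.createEntityManagerFactory("myUnit", props);
    }
}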
Quick inspection of your "test" says that you are using an external DataSource, but no separate DataSource for schema/sequence handling. If you use an external DataSource you need a separate schema (non-JTA) DataSource; otherwise it will try to do schema things in between persistence operations, and databases like PostgreSQL will never accept DDL going down the same Connection as SQL. The more normal way of doing things is to create your schema + tables up front before running persistence.
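If one did add a second DataSource for schema/sequence work, a minimal sketch (assuming the datanucleus.ConnectionFactoryName / ConnectionFactory2Name properties behave as I understand them; the JNDI names are made up) would be:

// Illustrative only: primary factory for persistence ops, secondary for schema/sequence ops.
// The JNDI names are invented; the property names are used on the assumption that
// ConnectionFactory2 is the one DataNucleus uses for schema and value generation.
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class TwoFactoryEmf
{
    public static EntityManagerFactory create()
    {
        Map<String, Object> props = new HashMap<>();
        props.put("datanucleus.ConnectionFactoryName", "java:comp/env/jdbc/MyAppDS");        // persistence
        props.put("datanucleus.ConnectionFactory2Name", "java:comp/env/jdbc/MyAppSchemaDS"); // schema/sequences
        return Persistence.createEntityManagerFactory("myUnit", props);
    }
}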
Man, this is just a test. It shows what happens when a datasource that is used for schema generation returns connections the way it does. If I had two data sources, nothing different would happen. The datasource I return in the test is non-JTA, by the way; I don't use JTA data sources in my application anyway.
Forget about any of this, and just look at the core of this problem: Datanucleus retrieves connections from data sources and does not sanitize them, despite there being no contract promise that data sources ever return pristine connections. Can you just tell me whether you agree with just this statement? If you disagree, then why; and if you do agree, then what's wrong with applying the fix above?
Postgres will accept DDL alongside SQL just fine, sorry to burst your bubble there.
Connections are taken from connection factory 1 for persistence, and factory 2 for schema ops. They need to be separated. I comment on the symptoms of what I see (since there is still nothing provided that represents a real situation).
Yes, DataNucleus doesn't "sanitise" anything, but then it shouldn't have to. A pool is to return a connection ready for use, so we can perform its ops on it, and then COMMIT it. There is no requirement to do more. All your test does is take some hypothetical situation of "here is a connection, and I frig with it before handing it out".
If you want a "sanitised" connection then you can easily CONTRIBUTE it as an option that people can turn on. I've no interest in using it personally.
You haven't addressed what had played about with your connection by the time a schema op was attempted on it (in the original trace). The DN LOG tells you what DN is doing with it.
I know well enough about PostgreSQL and using SQL + DDL + SQL etc before a COMMIT. Nobody's "bubble" is "burst".
"Man", I spent my time looking at your test and gave some thoughts. Since you adopt that tone, I'll spend my time on other things. Good luck.
I'm not sure I understand the tone comment. This conversation has animosity, yes, but it's mutual, and I'm trying to come to you with facts. I still don't understand the factory comment. I'm not making the JPA use the same connections for schema and persistence operations. Where does the JPA specification say that there must be 2 data sources? Why would JPA mix schema and persistence operations together, if a separate connection is retrieved every time (from one data source)? The connections are fully isolated, whether they come from one data source or another.
About DataNucleus not sanitizing anything and not having to - you are not providing any basis for that statement, that it doesn't have to. I've pointed out the relevant documentation of the data source, and you are just ignoring it, because in your opinion, backed by the limited set of frameworks you have so far worked with, more is promised to you than the specification actually says.
About postgres:
datanucleus=# create table hello ( i integer );
CREATE TABLE
datanucleus=# begin work
datanucleus-# ;
BEGIN
datanucleus=# insert into hello (i) values (2);
INSERT 0 1
datanucleus=# create table hello2 ( i integer );
CREATE TABLE
datanucleus=# insert into hello2 ( i ) values (1 );
INSERT 0 1
datanucleus=# drop table hello2;
DROP TABLE
datanucleus=# commit;
COMMIT
datanucleus=#
I'm having this problem: when using the Tomcat connection pool over Postgres, the setup fails with an exception from Postgres complaining that there was an attempt to change the transaction isolation in the middle of a transaction. The code that checks the schema structure requires transaction isolation level 8 (TRANSACTION_SERIALIZABLE), while the default is 2 (TRANSACTION_READ_COMMITTED). The connection that is returned from the Tomcat pool is not in the Transaction.IDLE state, probably because Tomcat ran a validation query on it and hasn't reset the transaction since. Should the JPA code reset the transaction? Should the connection pool return clean connections only?
Full exception, for the record: https://pastebin.com/3uCWXif0
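For reference, a bare-bones sketch of the failure mode described above (the JDBC URL and credentials are placeholders, and the explicit "select 1" merely stands in for whatever query left the connection inside a transaction):

// Sketch: leave a transaction open on a PostgreSQL connection, then try to change
// the isolation level; the PG JDBC driver refuses the change mid-transaction.
// The URL and credentials below are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IsolationRepro
{
    public static void main(String[] args) throws Exception
    {
        try (Connection cnx = DriverManager.getConnection(
                "jdbc:postgresql://localhost/datanucleus", "datanucleus", "datanucleus"))
        {
            cnx.setAutoCommit(false);
            try (Statement st = cnx.createStatement())
            {
                st.execute("select 1"); // opens a transaction that is never committed
            }
            // This mirrors what the schema check does next, and where Postgres complains:
            cnx.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        }
    }
}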