simao opened this issue 9 years ago
I have the same problem with LocalDateTime column (timestamp in Postgres)
Same problem with LocalDateTime
Out of curiosity, what versions of Slick, slick-pg and the Postgres JDBC driver are you guys using? I'm only asking because I had pretty much the same problem a while back when I upgraded my JDBC driver from 9.4-1201-jdbc41 to 9.4-1203-jdbc42, while using Slick 3.0.1 and slick-pg 0.9.2. I didn't have time to fully investigate (I just rolled back to the older JDBC driver, which fixed it), but from the limited time I spent debugging it, it seemed like the JDBC driver itself was returning TIMESTAMP WITH TIME ZONE columns without the actual time zone at the end...
I haven't looked at this since, but I tried to reproduce it just now with Slick 3.1.0 and slick-pg 0.10.0, upgrading the Postgres driver to the latest version (9.4-1204-jdbc42), and I don't seem to get the error any more, so maybe whatever was causing it has been fixed.
Scratch that - it does seem to be broken still. Everything works fine with Postgres driver version 9.4-1201-jdbc41, but using version 9.4-1202-* and above results in an error. Here's an example stack trace:
DateTimeParseException: Text '2015-07-24 16:14:13.0' could not be parsed at index 21
at java.lang.Thread.run
at java.util.concurrent.ThreadPoolExecutor$Worker.run
at java.util.concurrent.ThreadPoolExecutor.runWorker
at slick.backend.DatabaseComponent$DatabaseDef$$anon$2.run
at slick.backend.DatabaseComponent$DatabaseDef$$anon$2.liftedTree1$1
at slick.driver.JdbcActionComponent$QueryActionExtensionMethodsImpl$$anon$1.run
at slick.driver.JdbcActionComponent$QueryActionExtensionMethodsImpl$$anon$1.run
at slick.jdbc.StreamingInvokerAction$class.run
at slick.jdbc.StatementInvoker.foreach
at slick.jdbc.Invoker$class.foreach
at slick.jdbc.PositionedResultIterator.foreach
at scala.collection.Iterator$class.foreach
at slick.jdbc.PositionedResultIterator.hasNext
at slick.util.ReadAheadIterator$class.hasNext
at slick.util.ReadAheadIterator$class.slick$util$ReadAheadIterator$$update
at slick.jdbc.PositionedResultIterator.fetchNext
at slick.jdbc.StatementInvoker$$anon$1.extractValue
at slick.driver.JdbcInvokerComponent$QueryInvokerImpl.extractValue
at slick.relational.TypeMappingResultConverter.read
at slick.relational.ProductResultConverter.read
at slick.relational.ProductResultConverter.read
at slick.jdbc.SpecializedJdbcResultConverter$$anon$1.read
at slick.jdbc.SpecializedJdbcResultConverter$$anon$1.read
at com.github.tminglei.slickpg.utils.PgCommonJdbcTypes$GenericJdbcType.getValue
at com.github.tminglei.slickpg.PgDate2Support$Date2DateTimeFormatters$$anonfun$fromInfinitable$1.apply
at com.github.tminglei.slickpg.PgDate2Support$Date2DateTimeFormatters$$anonfun$fromInfinitable$1.apply
at com.github.tminglei.slickpg.PgDate2Support$Date2DateTimeFormatters$$anonfun$4.apply
at com.github.tminglei.slickpg.PgDate2Support$Date2DateTimeFormatters$$anonfun$4.apply
at java.time.ZonedDateTime.parse
at java.time.format.DateTimeFormatter.parse
at java.time.format.DateTimeFormatter.parseResolved0
I had a quick look at the issues for the Postgres driver on GitHub (https://github.com/pgjdbc/pgjdbc) and didn't immediately see anything related, so maybe this is somehow expected behavior? I don't have much experience with using JDBC directly, so I can't tell...
@tminglei - what do you think?
Hey guys, I just upgraded the postgres driver to 9.4-1204 and made some adjustments. The changes are here.
Yes, I encountered some problems, but they're not related to this issue.
I haven't reproduced the problem; maybe you guys can paste some reproducible code?
I tried 9.4-1202-jdbc41, 9.4-1202-jdbc42, 9.4-1203-jdbc41, 9.4-1203-jdbc42, 9.4-1204-jdbc41 and 9.4-1204-jdbc42.
Hi,
I will try to write a failing test over the weekend and see if I can isolate it.
Thanks
Hi.
I could not isolate this in a standalone project, but I managed to write a test in my project where it happens.
I have the following code:
val f = for {
  _ <- db.run(schema.vehicles += testVehicle)
  _ <- db.run(schema.vehicles.take(100).result)
  _ <- db.run(schema.vehicles.take(100).result)
  _ <- db.run(schema.vehicles.take(100).result)
  _ <- db.run(schema.vehicles.take(100).result)
  _ <- db.run(schema.vehicles.take(100).result)
  _ <- db.run(schema.vehicles.take(100).result)
} yield ()

whenReady(f) { _ => }
If I remove a single one of those db.run(...result) calls then the test is green.
I don't do any special setup, just create some tables, and this test runs alone; no other tests run before/after/during.
This is really weird.
Thank you
@motns I have this error with both 9.4-1201-jdbc41 and 9.4-1202-* and above.
Can you guys provide code that reproduces it? I can't reproduce it yet.
Hi,
I have had the same error since the 0.10.1 release. The code is very simple: I try to fetch a user with a 'createdAt' field (TIMESTAMPTZ) from a PostgreSQL database. The first fetch is fine and "user.createdAt" equals "2015-10-20 14:43:45.602+02", but the next one contains only the datetime without the time zone: "2015-10-20 14:43:45.602".
I have this code:
Schemas.users.filter(_.id === userInfo.userId).result.headOption
And the error:
java.lang.IllegalArgumentException: Invalid format: "2015-10-20 14:43:45.602" is too short
  at org.joda.time.format.DateTimeFormatter.parseDateTime(DateTimeFormatter.java:899) ~[joda-time-2.8.2.jar:2.8.2]
  at org.joda.time.DateTime.parse(DateTime.java:160) ~[joda-time-2.8.2.jar:2.8.2]
  at com.github.tminglei.slickpg.PgDateSupportJoda$JodaDateTimeImplicits$$anonfun$8.apply(PgDateSupportJoda.scala:40) ~[slick-pg_joda-time_2.11-0.10.1.jar:0.10.1]
  at com.github.tminglei.slickpg.PgDateSupportJoda$JodaDateTimeImplicits$$anonfun$8.apply(PgDateSupportJoda.scala:40) ~[slick-pg_joda-time_2.11-0.10.1.jar:0.10.1]
  at
Thanks a lot
For me, this seems to occur at random (Postgres 9.4.5, driver 9.4-1204-jdbc41, slick-pg 0.10.1). I don't know if this is related to slick-pg, as it gets fed a datetime string that is too short (missing the offset part, as mentioned previously). The weird thing is that when I just start my Play app, it works a couple of times; after that it stops working and I get the error described above every time. I'll see if I can get an app out there that demonstrates this problem.
I tried the following:
override def all: Future[Seq[User]] = {
  db.run(sql"select USERS.birth_date from USERS".as[(String)]).flatMap { birthdates =>
    println(birthdates)
    db.run(allUsers)
  }
}
When I get the error, the println statement outputs something like Vector(2015-10-29 22:56:02.719). Does this mean that the problem is actually in Slick itself?
I see the same problem. slick-pg 0.10.0, postgresql driver 9.4-1203-jdbc42.
In slick-pg's date2 support, it first gets a string from the result set, then parses that string to a LocalDate/ZonedDateTime.
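To make that concrete, here is a minimal, self-contained sketch of that read path (the formatter pattern below is an assumption for illustration, not slick-pg's exact internal one):

import java.time.ZonedDateTime
import java.time.format.{DateTimeFormatter, DateTimeParseException}

// Illustrative sketch: the driver hands back a string, and it is parsed as a
// zoned value. If the offset is missing, the parse fails.
object ReadPathSketch extends App {
  val fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSX")

  // What the text protocol returns for a timestamptz column -- parses fine:
  println(ZonedDateTime.parse("2015-11-10 15:51:08.497+01", fmt))

  // What comes back when the offset has been dropped -- this produces the kind
  // of DateTimeParseException quoted throughout this thread:
  try ZonedDateTime.parse("2015-11-10 15:51:08.497", fmt)
  catch { case e: DateTimeParseException => println(e.getMessage) }
}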
I debugged into the postgres driver, and here's the related code of getString(columnIndex):
// in AbstractJdbc2ResultSet.java
public String getString(int columnIndex) throws SQLException
{
    checkResultSet( columnIndex );
    if (wasNullFlag)
        return null;

    // varchar in binary is same as text, other binary fields are converted to their text format
    if (isBinary(columnIndex) && getSQLType(columnIndex) != Types.VARCHAR) {
        Object obj = internalGetObject(columnIndex, fields[columnIndex - 1]);
        if (obj == null) {
            return null;
        }
        // hack to be compatible with text protocol
        if (obj instanceof java.util.Date) {
            return connection.getTimestampUtils().timeToString((java.util.Date) obj);
        }
        if ("hstore".equals(getPGType(columnIndex))) {
            return HStoreConverter.toString((Map) obj);
        }
        return trimString(columnIndex, obj.toString());
    }

    Encoding encoding = connection.getEncoding();
    try
    {
        return trimString(columnIndex, encoding.decode(this_row[columnIndex - 1]));
    }
    catch (IOException ioe)
    {
        throw new PSQLException(GT.tr("Invalid character data was found. This is most likely caused by stored data containing characters that are invalid for the character set the database was created in. The most common example of this is storing 8bit data in a SQL_ASCII database."), PSQLState.DATA_ERROR, ioe);
    }
}
Note: for timestamptz, the return value of getSQLType(columnIndex) is Types.TIMESTAMP.
So if the column's format is binary, it gets into the if branch, calls internalGetObject(columnIndex, fields[columnIndex - 1]) and gets a Timestamp object, and then ... the zone info is discarded.
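A quick illustration of why that conversion cannot keep the offset (this is not the driver's exact conversion code, just the behaviour of the intermediate type):

import java.sql.Timestamp

// java.sql.Timestamp stores an instant with no zone/offset field, so any string
// rendered from it can only be a zone-less local timestamp.
object TimestampZoneLoss extends App {
  val ts = Timestamp.valueOf("2015-10-20 14:43:45.602")
  println(ts) // prints "2015-10-20 14:43:45.602" -- the original "+02" is gone
}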
But the column format is always text in my test cases, so I didn't reproduce it.
So you are able to reproduce it now? In my case the field is TIMESTAMP WITH TIME ZONE. I debugged in the PgCommonJdbcTypes$GenericJdbcType#getValue method; there the parsed string does not contain the offset. But it is still unclear to me why the field does get parsed properly (i.e., with an offset) right after server startup.
I have a REST endpoint that produces exactly the same SQL request on each call. The problem happens only when I repeat this call several times against a busy server (Apache Bench is a good tool for that).
Single requests in a non-busy development environment usually work fine.
@damirv, I haven't reproduced it yet. I just found a possible cause.
I've got a bit more information:
In my project (Slick 3.0.3, slick-pg 0.9.0, postgresql 9.4-1201-jdbc41) I used https://github.com/tototoshi/slick-joda-mapper for joda DateTime. After switching to slick-pg's implicits, tests started to throw this error.
@prettynatty, I still can't reproduce it, even with your tips.
I still can't localize it to a small project.
Hello,
I can confirm that this bug also occurs with ZonedDateTime and correct data sets in the PostgreSQL database.
We use: postgresql 9.4-1205-jdbc42, slick-pg 0.10.1, slick-pg_date 0.10.1, slick-pg_argonaut 0.10.1 and play-slick 1.1.1.
In the Postgresql table the following timestamp with time zone is stored: 2015-11-10 15:51:08.497+01
We get the following error:
2015-11-18 14:01:10,815 [error] a.a.OneForOneStrategy - Text '2015-11-10 15:51:08.497' could not be parsed at index 23
java.time.format.DateTimeParseException: Text '2015-11-10 15:51:08.497' could not be parsed at index 23
at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1947) ~[na:1.8.0_45-internal]
at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1849) ~[na:1.8.0_45-internal]
at java.time.ZonedDateTime.parse(ZonedDateTime.java:597) ~[na:1.8.0_45-internal]
at com.github.tminglei.slickpg.PgDate2Support$Date2DateTimeFormatters$$anonfun$4.apply(PgDate2Support.scala:78) ~[slick-pg_date2_2.11-0.10.1.jar:0.10.1]
at com.github.tminglei.slickpg.PgDate2Support$Date2DateTimeFormatters$$anonfun$4.apply(PgDate2Support.scala:78) ~[slick-pg_date2_2.11-0.10.1.jar:0.10.1]
The time zone is not delivered to PgDate2Support for parsing the value as a ZonedDateTime.
Would it be a solution to fall back to the system time zone if no time zone is available?
We also tested with the postgresql driver versions from 9.4-1205-jdbc42 down to 9.4-1200-jdbc42.
@faustas, can you always reproduce it?
If yes, can you debug into AbstractJdbc2ResultSet.getString(int columnIndex)? I want to know whether isBinary(columnIndex) returns true or false.
BTW, I can't reproduce it on my local machine.
Hello,
I debugged the application and can tell you the following.
Within AbstractJdbc2ResultSet.getString(int columnIndex), isBinary(columnIndex) is true.
The line Object obj = internalGetObject(columnIndex, fields[columnIndex - 1]); returns the Timestamp without the time zone.
As far as I understand the problem: getTimestamp within internalGetObject already folds the time zone into the return value, so the resulting time string no longer carries the time zone. The final problem is the type conversion into ZonedDateTime, which expects a time zone in the parsed string.
The PostgreSQL JDBC guys seem to have some problems with returning timestamps with/without time zone correctly. See:
It would be great if slick-pg could provide a workaround and select the values correctly.
Thanks @faustas, you confirmed my guess!
But I don't have a good solution to resolve it yet.
@tminglei Would it be possible to change the fetching logic for timestamp-related values? I'm thinking in the direction of pulling them out as formatted datetime strings and parsing them afterwards. So far writing these values seems to work, so it should suffice to change the "read logic".
I currently have no idea how to implement that, and I know this is a "hack", but it would maybe provide a usable workaround as long as the postgresql guys haven't fixed the underlying issue.
@jan0sch, this is what I did in slick-pg.
But the returned value true of isBinary(columnIndex) in AbstractJdbc2ResultSet.getString(int columnIndex) causes pgjdbc to run into a code branch where it reads a timestamp object and then converts it to a string, and the time zone info is lost in the process.
See the AbstractJdbc2ResultSet.getString(int columnIndex) code in my comments above.
So, as pgjdbc/pgjdbc#373 and pgjdbc/pgjdbc#304 point out, the root cause is that postgres sends back binary data (float or long) without a time zone, instead of a time string, to the jdbc driver.
Ah, okay. So the problem is that the column is a binary one, which means we would have to wait for a fix from the postgresql side or store timestamp values as strings, and that would be rather ugly. :-(
This is just a quick shot, but would something like this maybe provide a workaround until this is officially fixed?
diff --git a/org/postgresql/jdbc2/AbstractJdbc2ResultSet.java b/org/postgresql/jdbc2/AbstractJdbc2ResultSet.java
index 64d949f..efbc370 100644
--- a/org/postgresql/jdbc2/AbstractJdbc2ResultSet.java
+++ b/org/postgresql/jdbc2/AbstractJdbc2ResultSet.java
@@ -2986,7 +2986,14 @@ public abstract class AbstractJdbc2ResultSet implements BaseResultSet, org.postg
      * @return True if the column is in binary format.
      */
     protected boolean isBinary(int column) {
-        return fields[column - 1].getFormat() == Field.BINARY_FORMAT;
+        Field f = fields[column - 1];
+        if (f.getPGType().equalsIgnoreCase("timestamptz") || f.getPGType().equalsIgnoreCase("timetz")) {
+            // Prevent binary treating of fields with timezone.
+            return false;
+        }
+        else {
+            return fields[column - 1].getFormat() == Field.BINARY_FORMAT;
+        }
     }

 //----------------- Formatting Methods -------------------
I've just realised that the problem is rooted far deeper. Forget that workaround. ;-)
Well, we also need to ensure the datetime binary can be decoded to a string with the time zone here:
// in AbstractJdbc2ResultSet.java
public String getString(int columnIndex) throws SQLException
{
    ...
    Encoding encoding = connection.getEncoding();
    try
    {
        return trimString(columnIndex, encoding.decode(this_row[columnIndex - 1]));
    }
    catch (IOException ioe)
    {
        throw new PSQLException(GT.tr("Invalid character data was found. This is most likely caused by stored data containing characters that are invalid for the character set the database was created in. The most common example of this is storing 8bit data in a SQL_ASCII database."), PSQLState.DATA_ERROR, ioe);
    }
}
Please help continue checking it. :-)
I'm short on time, but I'll try to help as best as I can. I just came to realise that the whole JDBC thing has no concept of time zones for times and timestamps. But I guess fixing JDBC would take a lot and break a lot. ;-)
I'm getting the same problem, and it has broken all of our code. Is there any workaround (i.e. can we downgrade the JDBC version)?
@mdedetrich use java.sql.Timestamp
@maxcom Not really an option; our entire codebase uses DateTime, and we have a huge number of tables to deal with. Is there any other option?
Forcefully setting the JDBC version to "org.postgresql" % "postgresql" % "9.4-1201-jdbc41" fixed the problem for me, so at least in our case the upgrade of the JDBC version definitely broke something.
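For anyone else hitting this, the pin looks roughly like this in sbt (a sketch; adjust it to your own build setup):

// build.sbt -- force the pgjdbc version so a transitive dependency cannot
// pull in 9.4-1202 or later
libraryDependencies += "org.postgresql" % "postgresql" % "9.4-1201-jdbc41" force()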
@mdedetrich, glad to hear that! :-)
I also suspected that new pgjdbc changes introduced the bug, but I haven't found the suspicious code yet.
For us the bug persisted even after downgrading the postgresql driver version. We tried all versions from 9.4-1201 to 9.4-1206, using their "jdbc4", "jdbc41" and "jdbc42" flavours.
Because we are under time pressure, we resorted to using java.sql.Timestamp database columns, which are mapped via implicit conversions to the java.time.ZonedDateTime we use within the application.
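In case it helps others, here is a rough sketch of that kind of mapping (illustrative names, not our exact code; it assumes Slick 3.x and its plain PostgresDriver, whereas slick-pg users would import their own customized driver's api):

import java.sql.Timestamp
import java.time.{ZoneId, ZonedDateTime}
import slick.driver.PostgresDriver.api._

object TimestampMappings {
  // Store java.sql.Timestamp in the column, expose ZonedDateTime to the app.
  implicit val zonedDateTimeColumnType: BaseColumnType[ZonedDateTime] =
    MappedColumnType.base[ZonedDateTime, Timestamp](
      zdt => Timestamp.from(zdt.toInstant),                              // write: reduce to an instant
      ts  => ZonedDateTime.ofInstant(ts.toInstant, ZoneId.systemDefault) // read: reattach a zone
    )
}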
Another way to make it work (no crashing), instead of fixing it: override the timestamp with time zone type mapper in your customized postgres driver like this:
object MyPostgresDriver extends PostgresDriver
    ...
    with PgDateSupportJoda {
  ...
  override val api = new API with DateTimeImplicits {
    def getDateTimeFormatter(s: String) = {
      if (s.matches(".*[\\+|\\-][0-9]+$")) if (s.indexOf(".") > 0) jodaTzDateTimeFormatter else jodaTzDateTimeFormatter_NoFraction
      else if (s.indexOf(".") > 0) jodaDateTimeFormatter else jodaDateTimeFormatter_NoFraction
    }

    implicit override val jodaTimestampTZTypeMapper: GenericJdbcType[DateTime] = new GenericJdbcType[DateTime]("timestamptz",
      fnFromString = (s) => DateTime.parse(s, getDateTimeFormatter(s)),
      fnToString = (v) => v.toString(jodaTzDateTimeFormatter),
      hasLiteralForm = false)
  }
  ...
}
Then it will automatically survive when no time zone info is found, and your application code stays unchanged.
P.S. this is for joda, but you can easily adapt it for java.time.
Thanks for the hint.
As far as I've seen in the pgjdbc driver, the time zone definitely gets lost when retrieving a timestamp from the database. This shouldn't be a problem if all timestamps are saved with the same time zone, but in other cases the conversion into java.sql.Timestamp within the pgjdbc driver will lead to a loss of information. :-(
In my opinion this should be fixed on the pgjdbc side because all we can do on the "outer layer" are workarounds.
Anyway, many thanks for writing this great extension to slick. :+1: :-)
A workaround for Java 8 and OffsetDateTime (it uses the system default offset, which may produce wrong results):
with DateTimeImplicits {
  val zoneOffset = ZoneOffset.from(OffsetDateTime.now)

  def string2Offset(s: String): OffsetDateTime =
    if (s.matches(".*[\\+|\\-][0-9]+$")) OffsetDateTime.parse(s, date2TzDateTimeFormatter)
    else OffsetDateTime.of(LocalDateTime.parse(s, date2DateTimeFormatter), zoneOffset)

  def offset2String(v: OffsetDateTime): String = v.format(date2TzDateTimeFormatter)

  implicit override val date2TzTimestampTypeMapper = new GenericJdbcType[OffsetDateTime]("timestamptz",
    fnFromString = string2Offset,
    hasLiteralForm = false,
    fnToString = offset2String
  )
}
@tminglei Would it be possible to downgrade the JDBC version back to a version that worked? I think that's what caused the issue for some people (it definitely did for me).
Then maybe we should wait for the people at postgres-jdbc to fix the problem on their end.
@mdedetrich, I wouldn't mind doing it, if it could really resolve the problem. But as @jan0sch pointed out in a comment above, it can't.
So I'm not sure whether we should do it.
For me it did solve the problem (9.4-1201-jdbc41 and using OffsetDateTime).
Well, I think I found the cause.
There's a method in AbstractJdbc2Statement.java like this:
public void setPrepareThreshold(int newThreshold) throws SQLException {
    checkClosed();

    if (newThreshold < 0) {
        forceBinaryTransfers = true;
        newThreshold = 1;
    }

    this.m_prepareThreshold = newThreshold;
}
And from postgres driver 9.4-1202-* on, it is called in the AbstractJdbc2Statement constructor, and at the same time CachedQuery was used in the AbstractJdbc2Statement constructor.
forceBinaryTransfers forces the postgres backend to send binary data back. Then, based on the logic of getString(columnIndex) in AbstractJdbc2ResultSet.java that I pasted in a comment above, the timestamptz data is converted to a java.sql.Timestamp and then to a string, and the time zone part is lost from the final string.
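If anyone wants to confirm this locally, here is a hypothetical reproduction sketch with plain JDBC; it assumes the prepareThreshold URL parameter ends up in setPrepareThreshold as described above, and the connection details, table and column names are placeholders:

import java.sql.DriverManager

// Hypothetical repro: a negative prepareThreshold should set forceBinaryTransfers
// (per the pgjdbc code above), so getString on a timestamptz column should come
// back without the "+01"/"+02" offset.
object ForceBinaryRepro extends App {
  val url  = "jdbc:postgresql://localhost:5432/testdb?prepareThreshold=-1"
  val conn = DriverManager.getConnection(url, "test", "test")
  try {
    val rs = conn.prepareStatement("select created_at from users limit 1").executeQuery()
    while (rs.next()) println(rs.getString(1)) // expect a zone-less string on the binary path
  } finally conn.close()
}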
So, downgrading to `9.4-1201-*` can resolve the problem.
I will do it ASAP.
Hi all, I downgraded the postgres driver to 9.4-1201-jdbc41 and released slick-pg v0.10.2 to resolve the problem temporarily.
Please update your project dependencies to use it.
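For sbt users that should just be a version bump along these lines (a sketch; add whichever date/joda addon artifacts you already depend on):

// build.sbt -- 0.10.2 pins pgjdbc back to 9.4-1201
libraryDependencies += "com.github.tminglei" %% "slick-pg" % "0.10.2"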
Thanks for the update! I'm putting something into production on Tuesday and was hoping not to have to experiment with the workaround :)
I have this same problem. Downgrading the driver fixes it.
FWIW, I didn't see this issue and had the same problem. I worked around it by replacing the column type with one mapped to a timestamp instead of one mapped to String. I will test with the lower PostgreSQL version today.
@tminglei Not sure why, but I am still getting this with the latest version of slick-pg, v0.14.2, and I have cycled through the following postgres jdbc versions:
"org.postgresql" % "postgresql" % "9.4-1201-jdbc41" force(),
"org.postgresql" % "postgresql" % "9.4-1206-jdbc41" force(),
"org.postgresql" % "postgresql" % "9.4.1208.jre7" force()
And I am getting an error in all of the cases. Any suggestions?
Hi,
When using joda time with slick-pg I get the following error when reading a record from the database:
Is this a known problem? It seems the database is returning a string without the time zone.
In the database the record is stored as "2015-10-16 15:56:04.041+02"
Thank you