JPA provides no way to specify the precision of the SQL `time` and `timestamp` types (in fractional seconds).
For some time, Hibernate has been abusing the `precision` member of the `@Column` annotation for this purpose, but that approach has a problem: the member's default value is `precision=0` (which is actually intended to be interpreted as `null`), and `0` is both (a) a meaningful, useful value, and (b) not the best default for a `timestamp`.
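To illustrate why the `precision=0` default is a problem, here's a minimal sketch using a locally defined stand-in for `@Column` (the real annotation lives in `jakarta.persistence`; only its `precision` member is mirrored here). Because the default and the explicit value are both `0`, reflection cannot distinguish a field where the user explicitly asked for zero fractional digits from one where they said nothing at all:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.time.Instant;

public class PrecisionAmbiguity {
    // Stand-in mirroring the relevant member of jakarta.persistence.Column
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Column {
        int precision() default 0;
    }

    @Column(precision = 0)  // user explicitly wants 0 fractional digits
    Instant created;

    @Column                 // user said nothing; the default kicks in
    Instant updated;

    public static void main(String[] args) throws Exception {
        int explicit = PrecisionAmbiguity.class.getDeclaredField("created")
                .getAnnotation(Column.class).precision();
        int defaulted = PrecisionAmbiguity.class.getDeclaredField("updated")
                .getAnnotation(Column.class).precision();
        // Both read back as 0 -- the provider cannot tell them apart
        System.out.println(explicit == defaulted);  // true
    }
}
```

So a provider that sees `precision=0` on a `timestamp` column has no way to know whether to honor it or fall back to a sensible database default.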
Therefore, I think we should formalize something in the JPA spec, by defining:

1. the default precision for a `time` (0 digits of fractional-second precision, IMO),
2. the default precision for a `timestamp` (3-6 digits of fractional-second precision, depending on the database, IMO), and
3. a way for the user to explicitly specify the precision.
Now, there are three options I can think of for part 3:

1. change the default value of the `precision` member of `@Column` to `-1`, so that the defaulted value can be interpreted based on the type,
2. add a `fractionalSecondsPrecision` member to `@Column`, or
3. introduce a new `@FractionalSeconds` annotation.
The disadvantage of the first option is that it is, in principle, a breaking change.
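A sketch of what the third option might look like (the annotation name, its member, and the resolution logic are all assumptions for illustration, not anything the spec has defined): a dedicated annotation avoids both the `-1` sentinel and the overloading of `precision`, because its absence unambiguously means "use the default for this type".

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.time.LocalDateTime;

public class FractionalSecondsSketch {
    // Hypothetical annotation for option 3; the real spec may differ
    @Target({ElementType.FIELD, ElementType.METHOD})
    @Retention(RetentionPolicy.RUNTIME)
    public @interface FractionalSeconds {
        int value();  // digits of fractional-second precision
    }

    @FractionalSeconds(3)
    LocalDateTime lastModified;

    LocalDateTime created;  // no annotation: provider applies the type's default

    // How a provider might resolve the precision for a given field
    static int resolvePrecision(Field f, int typeDefault) {
        FractionalSeconds fs = f.getAnnotation(FractionalSeconds.class);
        return fs != null ? fs.value() : typeDefault;
    }

    public static void main(String[] args) throws Exception {
        Class<FractionalSecondsSketch> c = FractionalSecondsSketch.class;
        // Explicit precision wins; otherwise fall back to the timestamp default (say, 6)
        System.out.println(resolvePrecision(c.getDeclaredField("lastModified"), 6)); // 3
        System.out.println(resolvePrecision(c.getDeclaredField("created"), 6));      // 6
    }
}
```

Unlike option 1, this is purely additive, and unlike option 2 it keeps the time-specific concept out of the already overloaded `@Column`.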