emmanuelbernard closed this issue 5 years ago.
BTW I found another use-case for the "explicit save VS implicit + rollback" debate that I should mention:
If you have explicit save, you can save some entities, then validate others and fail the request. Suppose for example that you need to record failed log-in attempts, or user actions. You would save a log record, then proceed to do your action, fail it and report to the user. With implicit save and rollback it's a little harder.
I don't think that alone sways the decision, but I just wanted to record the argument.
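To make the argument concrete, here is a minimal sketch (all names hypothetical; in-memory lists stand in for Panache entities and a database) of how explicit save lets the log record survive a failed request, while the user change is simply never persisted:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory stand-in for an entity store; the real code would
// use entities and a database. These names are made up for the sketch.
class Store {
    static final List<String> savedLogs = new ArrayList<>();
    static final List<String> savedUsers = new ArrayList<>();
}

public class ExplicitSaveDemo {
    // With explicit save, we persist the log record first, then validate the
    // user; on failure we simply never call save on the user, so the log
    // entry survives while the user change is dropped.
    static boolean recordAttemptAndCreateUser(String username) {
        Store.savedLogs.add("login-attempt:" + username);  // explicit save of the log
        if (username == null || username.isEmpty()) {
            return false;                                  // fail the request; user never saved
        }
        Store.savedUsers.add(username);                    // explicit save of the user
        return true;
    }

    public static void main(String[] args) {
        boolean ok = recordAttemptAndCreateUser("");
        // The failed attempt is recorded even though the request failed.
        System.out.println(ok + " logs=" + Store.savedLogs.size() + " users=" + Store.savedUsers.size());
    }
}
```

With implicit save plus rollback, the log insert would be undone together with everything else, which is why it is "a little harder" in that model.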
Initial support for DAOs is in (in my branch).
I'll do a PR to merge this all in master tomorrow.
So, for the getter/setter, the strategy I had in mind is to:
- generate getter/setter for each public field of entities extending EntityBase, if they're not transient (modifier/annotation), and if they don't already exist (user written)

This way, if the user adds or removes a getter/setter, it should stay binary compatible for all call sites.
Now, those getters/setters would probably call $$_hibernate_read_field/$$_hibernate_write_field, but I'm a bit annoyed because ATM Hibernate already rewrites some field accesses, but only for those within the same class. So for example, in class Foo it rewrites accesses to the fields of Foo, but not to those of its superclass FooSuper.
Ideally I would rewrite field accesses before Hibernate gets to them, so that they all go via the getters/setters, and only those getters/setters would call the $$_hibernate_read_field/$$_hibernate_write_field methods. But I wonder if that's the right thing to do within the entity class itself: should I really replace all field accesses, including in the constructor? WDYT @Sanne?
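For illustration, here is a plain-Java sketch of what the generated accessors could look like. The $$_hibernate_read_/$$_hibernate_write_ bodies below are stubs standing in for the hooks the Hibernate enhancer would actually inject; the class and field names are made up:

```java
// Sketch of what the generated accessors might look like after enhancement.
// The $$_hibernate_* method bodies are stubs here; in a real enhanced class
// they would go through Hibernate's interceptor for dirty checking.
public class Person {
    public String name;  // public field as written by the user

    // Stub for Hibernate's field-read hook (the real one is injected by the enhancer)
    String $$_hibernate_read_name() {
        return name;
    }

    // Stub for Hibernate's field-write hook (the real one would track dirtiness)
    void $$_hibernate_write_name(String value) {
        this.name = value;
    }

    // Generated getter: external field reads get rewritten to this call
    public String getName() {
        return $$_hibernate_read_name();
    }

    // Generated setter: external field writes get rewritten to this call
    public void setName(String name) {
        $$_hibernate_write_name(name);
    }
}
```

A call site like `person.name = "x"` would be rewritten to `person.setName("x")`, which is why later adding or removing a hand-written accessor stays binary compatible for callers.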
Also, it's pretty trivial to rewrite field accesses to accessor methods, except for:
- transient fields (need to know they are transient)
- generate getter/setter for each public field [..] and if they don't already exist (user written)
do you plan to actually check the bytecode of each getter/setter to verify which field they are accessors of? It's probably safer than toying with naming patterns.
It would also be a good place to verify there isn't actual logic in those accessors: if I'm coding my app for field access yet have some accessors, I might have something in them that I'd not expect to trigger when doing field writes within my DAO logic. I'd suggest that if such accessors already exist and contain unexpected bytecode, we generate different accessors for our own use.
Regarding field accessors in constructors, I don't think you should instrument those either - but I'm not sure I understand all implications.. I guess we'll need to toy with some very evil tests :)
Also, it's pretty trivial to rewrite field accesses to accessor methods, except for:
- transient fields (need to know they are transient)
Can we ignore those? Do you mean @Transient or the transient modifier?
do you plan to actually check the bytecode of each getter/setter to verify which field they are accessors of?
No, I don't think this is reliable, and that's what the JavaBeans convention is made for.
It would also be a good place to verify there isn't actual logic in those accessors
I don't think so. We should document that accessors are there purely to work around the lack of Java language support for properties, to represent abstraction, and that they are always used in place of field accesses in order to maintain bytecode compatibility when you change your mind about whether you need a getter. They have a pretty handy by-product of enabling dirty checking too.
If users want accessors that are not used by the system, nor by hibernate, they should make them transient accessors and use a different name. I think the limitations are justified, coherent and easy to document, and in the face of it, I doubt a lot of people will actually write accessors.
Can we ignore those? Do you mean @Transient or the transient modifier?
I mean both. Yes we need to ignore those, but that means knowledge of the field itself, which means first building a model of the entities before instrumenting the bytecode to replace field accesses. That's not hard, or costly, actually.
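A minimal sketch of that first pass, assuming for this self-contained example that only the language's transient modifier needs checking (the real model would also scan for @Transient-style annotations, which aren't on the classpath here):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;

// First pass: build a model of which fields must be skipped (here: the
// transient modifier) before any bytecode rewriting happens. A real
// implementation would inspect the class model before instrumentation and
// also check annotations; this sketch uses reflection for brevity.
public class EntityModel {
    static Set<String> transientFields(Class<?> entity) {
        Set<String> skip = new HashSet<>();
        for (Field f : entity.getDeclaredFields()) {
            if (Modifier.isTransient(f.getModifiers())) {
                skip.add(f.getName());
            }
        }
        return skip;
    }

    // Hypothetical entity used only to exercise the sketch.
    static class SampleEntity {
        public String name;
        public transient String cached;  // must not get generated accessors
    }

    public static void main(String[] args) {
        // Only the transient field ends up in the skip set.
        System.out.println(transientFields(SampleEntity.class));
    }
}
```

The rewriting pass would then consult this model and leave accesses to the skipped fields untouched.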
So, after some tracing, I thought I was being smart by accident with the Hibernate processor depending on a type produced by the Panache processor, which meant that Hibernate would register its enhancers after Panache, which is what happens:
Registering Panache enhancer for model class org.jboss.shamrock.example.panache.Person
...
Registering Hibernate enhancer for model class org.jboss.shamrock.example.panache.Person
But then when I trace the enhancers' work they're sort of intermingled backwards, where the Panache enhancer starts first indeed, but then causes the Hibernate enhancer to start and end before the Panache one:
Running Panache enhancer for model class org.jboss.shamrock.example.panache.Person
Running Hibernate enhancer for model class org.jboss.shamrock.example.panache.Person
End Running Hibernate enhancer for model class org.jboss.shamrock.example.panache.Person
End Running Panache enhancer for model class Lorg/jboss/shamrock/example/panache/Person;
End2 Running Hibernate enhancer for model class org.jboss.shamrock.example.panache.Person
So I have to be smarter about this somehow.
Also, it appears that some classes get a default constructor injected by Hibernate, but not others. What's the criteria?
Sorry my bad, the constructor is actually generated by javac due to having some fields initialised.
Something doesn't add up in the ordering of the enhancers. Tracing indicates the Panache enhancer runs before Hibernate reads the bytecode and enhances it, but my Panache enhancer sees lots of field accesses for $$_hibernate_ fields before the Hibernate enhancer runs. I need to figure out how that can happen.
OK, I had it wrong, indeed if I want Panache enhancers to run first, I need to register them second, which can be achieved with some marker build items. I got this working.
I've got everything except validation/rollback.
I can't quite figure out a good API for validating entities at this point and wonder if it should not wait until after the first version?
In particular, I can add boolean EntityBase.validateOrRollback(), which is the sorta-reversed version of Play's boolean Entity.validateAndSave(), but we don't have (yet?) Play's accumulation of validation errors in a request-scoped Validation object ATM. Unlike its HV namesake, this is the entry point to checking validation and collecting all current request validation errors, which we can keep accumulating until we know what to do with them.
So if I write EntityBase.validateOrRollback, I either drop the validation errors (after I set rollbackOnly) to return false, or I return a set of errors; but then the API is not nice because it forces everyone to deal with errors right away, or to call isEmpty() on the set.
I think we should stay clear of validation for the very first release and polish the rest, and revisit validation more globally, perhaps by giving users the option to accumulate validation during the request.
WDYT @Sanne @emmanuelbernard @gsmet ?
Also, I tried calling EntityManager.getTransaction() and it started yelling at me very rudely about not supporting transactions or something:
Caused by: java.lang.IllegalStateException: Not supported for JTA entity managers
at org.jboss.shamrock.hibernate.orm.runtime.entitymanager.TransactionScopedEntityManager.getTransaction(TransactionScopedEntityManager.java:412)
at org.jboss.shamrock.hibernate.orm.runtime.entitymanager.ForwardingEntityManager.getTransaction(ForwardingEntityManager.java:258)
at org.jboss.shamrock.example.panache.TestEndpoint.testValidation(TestEndpoint.java:392)
Appears I have to use TransactionManager. I really think this EntityManager.getTransaction() that throws sometimes is a deathtrap.
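Under JTA the rollback has to go through the transaction manager rather than EntityManager.getTransaction(). Here is the call shape, sketched with a stubbed-out interface since the real javax.transaction.TransactionManager would be provided by the container:

```java
// Minimal stub mirroring the one TransactionManager method this sketch needs;
// in Quarkus/Shamrock the real instance would be injected by the container.
interface TransactionManagerStub {
    void setRollbackOnly();
}

public class RollbackDemo {
    // Records the rollback mark so the sketch is testable without a container.
    static class RecordingTm implements TransactionManagerStub {
        boolean rollbackOnly;
        @Override public void setRollbackOnly() { rollbackOnly = true; }
    }

    // Instead of em.getTransaction().setRollbackOnly(), which throws under
    // JTA, mark the current transaction through the transaction manager.
    static void failRequest(TransactionManagerStub tm) {
        tm.setRollbackOnly();
    }

    public static void main(String[] args) {
        RecordingTm tm = new RecordingTm();
        failRequest(tm);
        System.out.println("rollbackOnly=" + tm.rollbackOnly);
    }
}
```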
We are rude but that's for your own good.
regarding validation: deferring that feature is fine IMO.
+1 on polishing what we have first.
@Sanne should we express in the doc that people should not use EntityManager.getTransaction()? If you +1, let's open an issue and close that small one.
should we express in the doc that people should not use EntityManager.getTransaction()?
I will document that Hibernate ORM is automatically configured in JTA mode "out of the box", but I wouldn't add such additional details to our docs: I wouldn't know where to draw the line, and this quickly becomes a copy of the full 400-page reference.
That this method is unavailable is made very clear in both the Hibernate ORM docs and the javadoc of the method already. At some point we'll have to trust people to either know Hibernate already, or read its full docs / books.
FYI the javadoc is:
/**
* Return the resource-level <code>EntityTransaction</code> object.
* The <code>EntityTransaction</code> instance may be used serially to
* begin and commit multiple transactions.
* @return EntityTransaction instance
* @throws IllegalStateException if invoked on a JTA
* entity manager
*/
Until today, after about 15 years of using Hibernate, I had no clue what "resource-level" or "JTA" meant, or how it would make my life difficult. I agree we don't have to dumb down all the docs, but in general we suck at both API docs and exception messages: for example, here we don't explain how the user can check whether the transaction is in resource-level or JTA mode, so they can't even call this API safely, and we don't tell them what alternative API to use in the JTA case. Similarly, the exception message is "Not supported for JTA entity managers", which again does not tell me how to 1/ avoid the exception in the other case, and 2/ deal with this case.
Again, I'm not saying we should or could fix this particular example, and I'm certainly guilty of the same doc/exception mistakes myself. But really we should try our best to not make these mistakes in Protean.
When I asked why we enabled JTA by default, I was answered "why not?".
If it changes radically what a user can do, I wonder if we should maybe not make it the default?
I do agree that it sucks and I don't know what the JPA spec leads had in mind when they decided that this method would be exposed like it is.
But this has been "fine" for people apparently for many years, so I just don't think debating this in an unrelated PR is the best place.
If you all have a concrete actionable idea, open an issue.. otherwise let's move on. Unless you think this is a deal breaker for Panache's usability?
That's the JPA API; it's full of such weirdnesses, so if that's not OK, maybe we'll want to expose a brand new, minimalistic API?
But this has been "fine" for people apparently for many years
I don't know if that many people use JTA. At least we didn't for our apps at $previousJob, and I haven't seen a lot of people using it when doing consulting. If it changes the user expectations significantly, then we will have to guide them. That's all we are saying, I think.
If you all have a concrete actionable idea, open an issue.. otherwise let's move on. Unless you think this is a deal breaker for Panache's usability?
No, I didn't mean to make a big deal out of this, sorry, it's certainly anecdotal.
A lot of our customers use JTA.
Right, what I am realising is that Panache can be made much better if the RESTEasy counterpart is added and does some smooth transition of state like:
Let's close this issue since it shipped with the first release.
I've jumped into the same issue. Since the first version is released, I wonder if this should be reopened?
Did you hit a bug @4ntoine? This issue was more a list of features we wanted for Panache. That's why it's closed. Or are you referring to a specific feature mentioned here?
@emmanuelbernard is there any chance to use hibernate-panache also in WildFly, or is it (and will it be) exclusively available for Quarkus?
I think hibernate-orm would also benefit from Panache capabilities. Having something like hibernate-orm-panache would simplify Hibernate development in general.
The problem is that this relies on build-time enhancements inherent in Quarkus. Someone could write an APT processor to do the same, I guess.
Project Panache's goal is to make the simple JPA tasks (CRUD, simple queries, etc.) very simple, and to provide the general needs (pagination, etc.), so people can focus on the complex parts.
We start with the find helper method approach, as it's only mildly more verbose than method-name guesses and it scales better in complexity.
Here is the list of tasks for the first public release:
- validate method
- rollbackOnly method

Remember we keep the JPA semantics.
Here is the list of tasks that are optional or can come after the first public release:
Each bullet point can become a sub-issue when you deliver it. That way we can track progress for people consuming the feature. Edit this description and add the sub-issue number in the relevant checklist line.
If push comes to shove, we can trim this down to a nice first public release, but we think it's all manageable for the first one.