Otavio,
I think there may be some confusion as to the purpose of the TCK. The TCK is supposed to be written to test EVERYTHING that is written in the SPEC/API. All of the dependencies you removed are necessary to write a TCK that meets that goal.
I'll describe what each one of these is for:
<dependency>
    <groupId>org.jboss.arquillian.protocol</groupId>
    <artifactId>arquillian-protocol-servlet-jakarta</artifactId>
</dependency>
<dependency>
    <groupId>jakarta.servlet</groupId>
    <artifactId>jakarta.servlet-api</artifactId>
    <version>${jakarta.servlet.version}</version>
    <scope>provided</scope>
</dependency>
We have to be able to write tests against the Web Profile, which uses servlet as the communication mechanism between the TCK and the platform the tests run on. Why do we need to test against the Web Profile? Because our API defines how repositories should interact with Jakarta Transactions, which is part of the Web Profile: https://github.com/jakartaee/data/blob/main/api/src/main/java/jakarta/data/repository/Repository.java#L505-L525
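To make that concrete, here is a minimal sketch (not an actual TCK test) of the kind of test this enables; Book and BookRepository are hypothetical names, and the Arquillian JUnit 5 extension is assumed:

```java
import jakarta.annotation.Resource;
import jakarta.inject.Inject;
import jakarta.transaction.UserTransaction;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit5.ArquillianExtension;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(ArquillianExtension.class)
class TransactionalRepositoryTest {

    // The servlet protocol packages this archive, deploys it to the server,
    // and runs the test method inside the container where JTA is available.
    @Deployment
    public static WebArchive deployment() {
        return ShrinkWrap.create(WebArchive.class)
                         .addClasses(Book.class, BookRepository.class);
    }

    @Inject
    BookRepository repository; // hypothetical repository under test

    @Resource
    UserTransaction tx; // requires Web Profile or above

    @Test
    void savesWithinUserTransaction() throws Exception {
        tx.begin();
        repository.save(new Book("0-395-19395-8", "The Hobbit"));
        tx.commit();
    }
}
```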
<dependency>
    <groupId>jakarta.inject</groupId>
    <artifactId>jakarta.inject-api</artifactId>
    <version>${jakarta.inject.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jakarta.enterprise</groupId>
    <artifactId>jakarta.enterprise.cdi-api</artifactId>
    <version>${jakarta.enterprise.cdi.version}</version>
    <scope>provided</scope>
</dependency>
We have to use the @Inject annotation to write tests so that we can get an instance of a repository from the vendor. Our spec specifically calls out what is expected when a platform uses CDI for dependency injection, and thus we need to be able to test it: https://github.com/jakartaee/data/blob/main/spec/src/main/asciidoc/repository.asciidoc#jakarta-data-cdi-extension
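For instance, a test needs roughly this shape (a sketch; Book and BookRepository are hypothetical names used for illustration):

```java
import jakarta.data.repository.CrudRepository;
import jakarta.inject.Inject;

// A repository the TCK defines; the Jakarta Data provider (for example via
// its CDI extension) is responsible for supplying the implementation.
interface BookRepository extends CrudRepository<Book, String> {
}

class RepositoryInjectionTest {

    @Inject
    BookRepository repository; // the injected instance comes from the vendor
}
```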
Looking back at the TCK, there is one instance where I used CDI even though I cannot guarantee the test is running on Core Profile or above:
Fixing under issue: #142
<dependency>
    <groupId>jakarta.nosql</groupId>
    <artifactId>nosql-core</artifactId>
    <version>1.0.0-b6</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jakarta.persistence</groupId>
    <artifactId>jakarta.persistence-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
</dependency>
Honestly, I don't understand how we could write a TCK without having Persistence and NoSQL entities provided by these APIs. Our spec specifically says these two kinds of entities need to be supported. https://github.com/jakartaee/data/blob/main/spec/src/main/asciidoc/repository.asciidoc#entity-classes
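For example (a sketch with a hypothetical Book entity), the TCK's common entities need a Jakarta Persistence flavor and a Jakarta NoSQL flavor:

```java
// Persistence flavor of a hypothetical TCK entity:
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
public class Book {

    @Id
    public String isbn;

    public String title;
}

// NoSQL flavor of the same entity (kept in a separate source tree), assuming
// the Jakarta NoSQL annotations:
//
// @jakarta.nosql.Entity
// public class Book {
//     @jakarta.nosql.Id
//     public String isbn;
//     @jakarta.nosql.Column
//     public String title;
// }
```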
In essence, AFAIK, we should be agnostic to annotations and the DI engine, so this dependency, at least to the core, should not be here.
We cannot be agnostic because we need to test ALL scenarios, including entity annotations from both Persistence and NoSQL, entities that have no annotations, and repositories injected using CDI.
The thing you have to realize is that an implementation doesn't need to support everything; it depends on where it sits on the profile spectrum: Standalone | Core | Web | Full.
@KyleAure
About transactions: remember that we added that, and we could remove these transaction details. I've proposed specializing them to JPA.
IMHO: it is a bug; the repository must be agnostic instead of only working with JPA and occasionally with NoSQL.
We cannot be agnostic because we need to test ALL scenarios, including entity annotations from both Persistence and NoSQL, entities that have no annotations, and repositories injected using CDI.
If they use the JAX-RS client as a persistence layer, they will use JSON Binding's annotations. How can they test it?
If they use the JAX-RS client as a persistence layer, they will use JSON Binding's annotations. How can they test it?
The intention is for the TCK instructions to allow manually swapping out the Jakarta NoSQL/Persistence entity annotations on the common entities with custom entity annotations/model defined by the Jakarta Data provider. I wouldn't expect Kyle to have added that to the instructions yet, but it should be coming to address scenarios like that. If REST does get added officially to the specification at some point, then the annotations could be put directly onto the TCK-provided entities, but there will always be a need for instructions that allow the manual swap in order to allow testing custom entity models, whether this one ends up being custom or not.
If you do it manually for all possible combinations, it will become an endless TCK. We'll need to duplicate it and eventually lose the focus on our database integration.
We can delegate it to the vendor exploring the inversion of control. I'll create a prototype in a different repository to show that it is possible :)
To clarify, the manual replacement step would be taken by the vendor who introduces the custom entity model, not us. This does not make an endless TCK. But it certainly isn't how we want the TCK to be either, so if you have ideas for other solutions, please look into it. A better approach would be preferable, so I'm definitely interested in seeing what you come up with.
I created this proposal: https://github.com/soujava/data-tck-proposal
The cons:
Pro:
Challenge:
The proposal at https://github.com/soujava/data-tck-proposal abstracts out both the @Inject and the entity. This would allow a vendor to certify an implementation that doesn't allow injection via jakarta.inject.Inject, contrary to the requirements of the specification. What would it look like if the LibrarySupplier were removed from the proposal in favor of using @Inject? Does the remaining supplier for plugging in entity interfaces work on its own?
Let me test; I'll create a tag and try this scenario.
I wouldn't say I like Awaitility, but because of the BASE semantics of the database, I did not find anything better.
// BASE: with eventual consistency, the write may not be visible immediately,
// so poll until the saved entity can be read back.
Awaitility.await().atMost(DURATION)
        .until(() -> {
            repository.findById(book.isbn()).ifPresent(reference::set);
            return reference.get() != null;
        });
We can do it; I've updated this version.
CDI does not work with a broad wildcard generic like this:

@Inject
private CrudRepository<? extends Book, String> repository;
So, I created this LibrarySupplier as a workaround:

public interface LibrarySupplier {
    <T extends CrudRepository<? extends Book, String>> T get();
}
If it is supposed to be DI agnostic, why are we making Inject required? I could use Autowired and still work with this spec, right?
I got that we should cover the Jakarta EE platform aspect. Does the TCK allow modules? Then we could have one module focusing on the data integration complexity:
Another one focusing on the Jakarta EE platform world:
If it is supposed to be DI agnostic, why are we making Inject required? I could use Autowired and still work with this spec, right?
This is a very hard question to answer because the TCK needs to have an injection point; otherwise, we cannot get implementations from vendors running the TCK.
The (controversial) counter-argument I would make is: if the easiest way to write the TCK is to ask vendors to create SPIs just for the TCK, then should we consider removing DI from the spec altogether and instead have a standard SPI for providing a repository, so that standalone and platform implementations can follow the same standard?
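To make the idea concrete, such an SPI could look something like this sketch (RepositorySupplier is a hypothetical name, not part of the spec):

```java
import java.util.ServiceLoader;

// Hypothetical SPI: a vendor registers its implementation through the
// standard ServiceLoader mechanism (META-INF/services).
public interface RepositorySupplier {

    <T> T get(Class<T> repositoryInterface);
}

class RepositoryLookup {

    // The TCK could then obtain repositories the same way in standalone mode
    // and on the platforms, without depending on dependency injection.
    static <T> T repository(Class<T> type) {
        return ServiceLoader.load(RepositorySupplier.class)
                            .findFirst()
                            .orElseThrow()
                            .get(type);
    }
}
```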
I got that we should cover the Jakarta EE platform aspect. Does the TCK allow modules?
We don't need to use modules for this; we already have this functionality with the tiered annotations: @Standalone for the data integration and @Core/@Web for everything else.
The problem is that tests written for @Standalone also need to be runnable on a platform, OR we have to duplicate our tests.
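As a sketch of how the tiers compose (the tier annotations are the ones named above; their package, plus the Book entity and BookRepository, are assumed for illustration):

```java
// @Standalone/@Web come from the TCK's JUnit framework (package assumed).
import static org.junit.jupiter.api.Assertions.assertTrue;

import jakarta.inject.Inject;
import org.junit.jupiter.api.Test;

// Runs everywhere: standalone mode and every profile.
@Standalone
class BasicCrudTests {

    @Inject
    BookRepository repository;

    @Test
    void saveAndFind() {
        repository.save(new Book("0-395-19395-8", "The Hobbit"));
        assertTrue(repository.findById("0-395-19395-8").isPresent());
    }
}

// Requires Web Profile or above, for example to exercise Jakarta Transactions:
// @Web
// class TransactionalTests { ... }
```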
If it is supposed to be DI agnostic, why are we making Inject required? I could use Autowired and still work with this spec, right?
It makes sense that jakarta.inject.Inject is required for injection because Jakarta Data is a Jakarta EE specification and, as such, standardizes on the Jakarta platform. With Jakarta specifications you can write applications fully according to Jakarta EE standards, and the TCK verifies that this can be achieved. The specification is not supposed to be Dependency Injection agnostic. It is a Jakarta spec and is expected to follow Jakarta standards. As I've brought up before, the rest of the Jakarta world would have every right to insist upon mandating Jakarta CDI as well. However, we have intentionally architected the spec in a way that allows us to avoid mandating the use of Jakarta CDI for the sake of providers with their own custom dependency injection, which can leverage the standard jakarta.inject.Inject.

How dependency injection is achieved via jakarta.inject.Inject (whether by CDI or something custom) becomes an implementation detail that the specification need not be concerned with, because it does not impact how the application is instructed to use the Jakarta Data spec. The specification does not and should not (and thus the TCK does not and should not) test Autowired and other APIs of vendors' custom dependency injection. Vendors are free to make that work as vendor-specific API and have the full responsibility for any tests they want to perform as part of that effort.
Bean Validation works while being DI agnostic; the same happens with JPA. Thus, are they doing it wrong?
Sorry, I'm not familiar with the Jakarta EE TCKs.
P.S.: I'm not saying you should not test the whole Jakarta EE platform with the spec; we must! My question is: why not modules? In my head, I can see a kind of test pyramid where the base gives us more isolation to focus on data integration, and the top covers all the Jakarta EE integration with the spec as a module.
I guess a concrete case makes it clearer:
Microstream is a database that supports frameworks such as Quarkus, MicroProfile, Micronaut, and Spring. How can they run the TCK for those extensions?
Bean Validation works while being DI agnostic; the same happens with JPA. Thus, are they doing it wrong?
Could you be more specific about what in the Bean Validation and JPA specifications you are concerned might be wrong? The fact that vendors have leveraged these technologies in creative vendor-specific ways on top of what is defined in the specification does not make the specifications wrong. That includes the innovation of the repository pattern itself which we are now standardizing back into Jakarta EE as Jakarta Data.
Microstream is a database that supports frameworks such as Quarkus, MicroProfile, Micronaut, and Spring. How can they run the TCK for those extensions?
There are two sets of vendors that must run the TCK:
If Microstream or any other vendor creates a product that is a Jakarta Data provider, it will run the TCK in standalone mode.
If Microstream or any other vendor creates a product that is a Jakarta EE profile that includes Jakarta Data then it will run the TCK in the mode for the respective Jakarta EE profile.
If it creates a product that is neither of the above, but instead something that integrates with Jakarta Data/providers, then there is no requirement or expectation to run the TCK. That is not what the TCK is for. The TCK has no obligation to try to provide extra coverage for vendor-specific extensions or customizations that aren't themselves implementations of the spec. It is enough for the TCK to cover spec-defined function. That is all it is for. We should not be expending time and resources trying to go beyond that. Vendors will take care of their own extensions and optional function that they add, and will write whatever tests of their own they wish to cover them.
Could you be more specific about what in the Bean Validation and JPA specifications you are concerned might be wrong? For example, I can use Bean Validation without Inject, and the same happens with JPA.
I agree with you about the targets:
We agree that we must have that TCK audience, and if we agree about those points, why should I have the inject instead of focusing on the goal of the spec:
Jakarta Data provides an API to allow easy data access technologies. Thus, a Java developer can split the persistence and the model with several features, such as a Repository interface with query by method, where the framework will implement it.
We agree that we must have that TCK audience, and if we agree about those points, why should I have the inject instead of focusing on the goal of the spec: Jakarta Data provides an API to allow easy data access technologies. Thus, a Java developer can split the persistence and the model with several features, such as a Repository interface with query by method, where the framework will implement it.
Within your question is the incorrect assertion that Inject is somehow opposed to the goals of the specification. Let's go through the parts of the specification goal individually:
"Jakarta Data provides an API" - jakarta.inject.Inject is consistent with that. It is official Jakarta EE API that can be standardized upon.
"allow easy data access technologies" - again, jakarta.inject.Inject helps to achieve this goal. By standardizing the approach to injection, you get to follow simple steps to write the application one easy way, and it works everywhere, regardless of the provider. Not having this would mean the complexity of needing to figure out how to write the application differently for every provider.
"Thus, a Java developer can split the persistence and the model with several features, such as a Repository interface with query by method, where the framework will implement it." - jakarta.inject.Inject helps here too by standardizing how the application obtains what the framework implements.
jakarta.inject.Inject is not only consistent with the specification goals but is a key part of achieving them. It is wrong to characterize it as opposed to them.
Within your question is the incorrect assertion that Inject is somehow opposed to the goals of the specification. Let's go through the parts of the specification goal individually:
I never said that; we agree that it is crucial, and we must test not just it but the whole Jakarta EE platform, such as CDI, Bean Validation, JTA, etc.
My point is that once it is about data integration, the main focus or MVP should guarantee this point first and then move to the next one.
The Minimum Viable Product for a TCK is to test what the specification defines. The draft list of TCK scenarios currently focuses predominantly on data access via the repository pattern because that is mostly what the specification covers, with just 3 of the 62 currently written scenarios mentioning aspects like JTA and CDI (when available in a profile), which are also part of the spec. There are currently 0 scenarios listed for Bean Validation because the Jakarta Data spec currently says nothing about it. It would be nice to get back to writing these tests rather than debating decisions that were previously made. If we do need to add something to the specification for Bean Validation, that should be a separate issue. Can we close this one?
We can close it as a "won't fix" and define it only for JPA on Jakarta EE, or maybe, instead of being a spec, focus on being a feature of the JPA spec.
Because we're not looking only at JPA for data. There is more beyond Microstream, such as the Cassandra Mapper, where it is used like this:
@Dao
public interface ProductDao {

    @Select
    Product findById(UUID productId);

    @Insert
    void save(Product product);

    @Delete
    void delete(Product product);
}
Hey Kyle, Nathan, could you help me?
I finally started to take a look at the TCK to begin writing the tests :) I understood that we could run the TCK without annotations and DI.
But when I removed those dependencies, it broke the build: https://github.com/jakartaee/data/blob/main/tck/pom.xml
In essence, AFAIK, we should be agnostic to annotations and the DI engine, so this dependency, at least to the core, should not be here.
I could explore an OOP design with inversion of control, where we'll provide SPI interfaces for the entity and a supplier of the repository.
E.g.:
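A hedged sketch of what those SPI interfaces could look like (LibrarySupplier follows the shape used in the proposal; BookSupplier is a hypothetical name for the entity side):

```java
// SPI for the repository: the vendor supplies its implementation.
public interface LibrarySupplier {

    <T extends CrudRepository<? extends Book, String>> T get();
}

// SPI for the entity: the vendor supplies its own annotated Book subtype.
public interface BookSupplier {

    Book of(String isbn, String title);
}
```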
Thus, each vendor will implement it with its own annotations and the desired DI.
Eclipse JNoSQL:
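(A hedged illustration; Library and JNoSQLLibrarySupplier are hypothetical names.)

```java
import jakarta.enterprise.inject.spi.CDI;

// Hypothetical: Library extends CrudRepository<Book, String>, the Book entity
// carries jakarta.nosql annotations, and the repository is resolved from CDI.
public class JNoSQLLibrarySupplier implements LibrarySupplier {

    @Override
    @SuppressWarnings("unchecked")
    public <T extends CrudRepository<? extends Book, String>> T get() {
        return (T) CDI.current().select(Library.class).get();
    }
}
```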
JPA vendor:
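(Also a hedged illustration; JpaLibrary is an invented class name.)

```java
import jakarta.persistence.EntityManager;

// Hypothetical: a JPA vendor backs the repository with an EntityManager and a
// Persistence-annotated Book entity.
public class JpaLibrarySupplier implements LibrarySupplier {

    private final EntityManager entityManager; // however the vendor obtains it

    public JpaLibrarySupplier(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    @Override
    @SuppressWarnings("unchecked")
    public <T extends CrudRepository<? extends Book, String>> T get() {
        return (T) new JpaLibrary(entityManager);
    }
}
```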
Did I miss something, or is it missing code?
If it still needs this part, I can write it without Arquillian as a prototype, and then you can translate it here.
P.S.: with the JPA specialization, it won't impact the JPA tests, but it will apply only to JPA.