appreciated opened this issue 2 months ago
We discussed this internally and are inclined to accept such a PR.
Could you lay out how the annotation would look? I expect it would contain some SQL-snippet String?
Thanks for your response @schauder and the others on the team.
Here is my first draft for the new annotations for Spring Data JDBC. They are conceptually very similar to the FilterDef and Filter annotations used in Hibernate.
Here are the proposed annotations. The naming convention stays close to Hibernate's to provide familiarity to developers who have used Hibernate, although we should consider the potential for confusion this might cause:
As you may notice, I left out the `parameters` attribute from `FilterDef`. This is only a proposal, to reduce the initial development effort on my end.
@FilterDef
This annotation can be used to define a filter at the entity level. It provides a way to define a reusable condition that may be applied to queries.
```java
@FilterDef(
    name = "someExample",
    defaultCondition = "isDeleted IS NULL"
)
```
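To make the shape of the proposal concrete, here is a sketch of what the `@FilterDef` annotation declaration itself could look like. Everything here is an assumption about a possible design, not existing Spring Data JDBC API; `Customer` is a hypothetical entity used only to show the annotation being read back.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch of the proposed annotation (not part of Spring Data JDBC).
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface FilterDef {

    /** Name under which the filter can be referenced elsewhere. */
    String name();

    /** SQL snippet appended to the WHERE clause when the filter is active. */
    String defaultCondition();
}

// Hypothetical entity carrying the filter definition.
@FilterDef(name = "someExample", defaultCondition = "isDeleted IS NULL")
class Customer {
}

class FilterDefSketch {
    public static void main(String[] args) {
        // The framework would read the definition reflectively at runtime.
        FilterDef def = Customer.class.getAnnotation(FilterDef.class);
        System.out.println(def.name() + " -> " + def.defaultCondition());
    }
}
```

Runtime retention is the important design choice here: the mapping layer would need to read the condition reflectively when it builds SQL.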
@QueryFilter
This annotation is intended for use in repository methods to specify which filter to apply when executing the method.
```java
/**
 * Annotation to be used in repository methods or at the repository class level
 */
@QueryFilter(name = "someExample")
```
@TableFilter
This annotation can be used at the model level, alongside the @Table
annotation, to specify a filter that applies to all operations involving the annotated entity.
```java
/**
 * Annotation to be used at the model level alongside the @Table annotation
 */
@TableFilter(name = "someExample")
```
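Put together, the three proposed annotations might be wired up as follows. This is a sketch of the proposed shape only; the annotation definitions, the `Order` entity, and the `OrderRepository` interface are all hypothetical, and nothing here reflects current Spring Data JDBC behaviour.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation definitions mirroring the proposal above.
@Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME)
@interface FilterDef { String name(); String defaultCondition(); }

@Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME)
@interface TableFilter { String name(); }

@Target({ElementType.TYPE, ElementType.METHOD}) @Retention(RetentionPolicy.RUNTIME)
@interface QueryFilter { String name(); }

// Model level: the filter is defined once and, via @TableFilter,
// applied to every operation involving Order.
@FilterDef(name = "someExample", defaultCondition = "isDeleted IS NULL")
@TableFilter(name = "someExample")
class Order {
}

// Repository level: the same filter applied to a single query method.
interface OrderRepository {
    @QueryFilter(name = "someExample")
    Iterable<Order> findByCustomerId(Long customerId);
}

class FilterWiringSketch {
    public static void main(String[] args) {
        // A mapping layer could look the condition up by name at query time.
        System.out.println(Order.class.getAnnotation(FilterDef.class).defaultCondition());
    }
}
```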
What do you think about `@QueryFilter` and `@TableFilter`? I look forward to your thoughts and feedback on this initial concept.
If child entities can inject custom filtering conditions, why not give the same mechanism to the aggregate root? Combined with the existing SpEL mechanism, this could also achieve multi-tenancy in shared mode.
These are some of my ideas, which aim directly at the highest flexibility.
Soft deletion, shared-mode multi-tenancy, and several other requirements all boil down to appending SQL condition fragments when certain conditions hold, so it seems better to provide such an extension mechanism directly.
For repositories with automatically derived implementations, the SQL fragment would be completed internally. For `@Query` methods, a placeholder-replacement approach avoids having to parse the SQL.
Since I currently know very little about the internals of Spring Data JDBC, this proposal only offers an angle for the team to evaluate feasibility.
Our team currently uses data-jdbc in new projects, and getting started with shared-mode multi-tenancy is indeed difficult.
If you are interested in this idea, please reply so we can discuss further. Here is a sample:
```java
import java.lang.invoke.MethodHandle;

/**
 * If the support check passes, run the filter to obtain the WHERE condition
 * fragment of the SQL.
 *
 * @param <E> entity class
 */
public interface ConditionFilter<E> {

    /**
     * Pre-check whether this filter applies.
     *
     * @param eClass entity class
     * @param methodHandle repository method
     * @return whether to apply the filter
     */
    boolean support(Class<E> eClass, MethodHandle methodHandle);

    /**
     * Return the WHERE condition fragment of the SQL.
     *
     * @param eClass entity class
     * @param entity actual entity object
     * @param methodHandle repository method
     * @param parameters repository method parameters
     * @return WHERE condition fragment of the SQL
     */
    String filter(Class<E> eClass, Object entity, MethodHandle methodHandle, Object[] parameters);
}
```
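For illustration, a soft-delete filter implementing this interface might look like the following. This is a sketch under the assumption that the interface above is adopted as proposed; `SoftDeleteFilter`, the `SoftDeletable` marker interface, and the `is_deleted` column are all hypothetical.

```java
import java.lang.invoke.MethodHandle;

// Package-private copy of the proposed interface, so the sketch is self-contained.
interface ConditionFilter<E> {
    boolean support(Class<E> eClass, MethodHandle methodHandle);
    String filter(Class<E> eClass, Object entity, MethodHandle methodHandle, Object[] parameters);
}

// Hypothetical marker for entities that carry the soft-delete column.
interface SoftDeletable {
}

// Hypothetical entity opting into soft deletion.
class ActiveOrder implements SoftDeletable {
}

// Hypothetical filter: applies only to soft-deletable entities and
// contributes a fixed WHERE fragment.
class SoftDeleteFilter<E> implements ConditionFilter<E> {

    @Override
    public boolean support(Class<E> eClass, MethodHandle methodHandle) {
        // Only apply the filter to entities that declare the soft-delete column.
        return SoftDeletable.class.isAssignableFrom(eClass);
    }

    @Override
    public String filter(Class<E> eClass, Object entity, MethodHandle methodHandle, Object[] parameters) {
        // Fragment spliced into generated or user-supplied SQL.
        return "is_deleted IS NULL";
    }
}

class SoftDeleteFilterDemo {
    public static void main(String[] args) {
        SoftDeleteFilter<ActiveOrder> filter = new SoftDeleteFilter<>();
        if (filter.support(ActiveOrder.class, null)) {
            System.out.println(filter.filter(ActiveOrder.class, null, null, null));
        }
    }
}
```

Because `support` sees both the entity class and the repository method, the same mechanism could decide per call whether sub-entity loading should be filtered as well.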
```java
public interface XxRepo {

    // %filter is the ConditionFilter placeholder
    @Query("select * from x_x where name = :name and %filter")
    Optional<Object> findByName(String name);
}
```
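A minimal sketch of the placeholder replacement this example alludes to. The `%filter` token and the substitution logic are assumptions about one possible implementation, not existing Spring Data JDBC behaviour; falling back to a neutral `1 = 1` condition when no filter applies is one way to keep the SQL valid.

```java
class FilterPlaceholderSketch {

    // Replace the %filter token with the fragment produced by a ConditionFilter.
    // If no filter applies, substitute a neutral condition so the SQL stays valid.
    static String applyFilter(String sql, String fragment) {
        String replacement = (fragment == null || fragment.isEmpty()) ? "1 = 1" : fragment;
        return sql.replace("%filter", replacement);
    }

    public static void main(String[] args) {
        String sql = "select * from x_x where name = :name and %filter";
        System.out.println(applyFilter(sql, "is_deleted IS NULL"));
        // -> select * from x_x where name = :name and is_deleted IS NULL
    }
}
```

Plain token substitution like this is exactly what avoids parsing the user-supplied SQL, which is the point made later in the thread.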
@DreamStar92 Thanks for your thoughts on this matter. Your proposal seems to me like a simple and flexible solution for many problems that reuses existing mechanisms. I must admit I'm not sure I fully grasped your suggestion, though; it seems to me that it solves a slightly different issue: it allows reusing the same filter multiple times for queries etc. and allows writing more "DRY" queries.
This issue is about how collections annotated with `@MappedCollection` are resolved by default, and how to add more control over that:
Let's take your Repository and modify it a bit:
```java
public interface XxRepo {

    // %filter is the ConditionFilter placeholder
    @Query("select * from x_x where name = :name and %filter")
    Optional<SomeModel> findByName(String name);
}
```
Let's also assume that the fetched Model has some external data:
```java
class SomeModel {
    ...
    @MappedCollection(idColumn = "...", keyColumn = "...")
    private Collection<SomeOtherModel> someOtherModels = Collections.emptyList();
}
```
If no custom `rowMapper` is used, `someOtherModels` will eventually be filled with rows that are already "deleted".
@DreamStar92 did I get it right or did I miss something?
When your query result is an aggregate root, data-jdbc loads the sub-entities for you.
The SQL generated for these sub-entities also goes through ConditionFilter.
The method parameters of ConditionFilter can be adjusted to whatever the implementation actually needs.
The purpose of this proposal is to decide programmatically under which circumstances conditions are added.
Let's walk through an example of the process.
```java
public interface XxRepo {

    // %filter is the ConditionFilter placeholder
    @Query("select * from x_x where name = :name and %filter")
    Optional<SomeModel> findByName(String name);
}
```
```java
class SomeModel {
    ...
    @MappedCollection(idColumn = "...", keyColumn = "...")
    private Collection<SomeOtherModel> someOtherModels = Collections.emptyList();
}
```
First, ConditionFilter runs support, receiving the MethodHandle of findByName and the SomeModel class, and then executes filter to obtain the corresponding SQL fragment.
Because findByName uses @Query, placeholder replacement splices the condition in, completing the SQL statement for the aggregate root.
Next, data-jdbc queries the corresponding sub-entities for the result.
At this point, the support method of ConditionFilter receives the MethodHandle and the SomeOtherModel class.
This is the step where we can control whether the ConditionFilter is applied to sub-entity loading.
If it is, the fragment returned by the subsequent filter call is spliced in internally by data-jdbc (which itself should work similarly to the @Version mechanism).
The placeholder only exists to avoid parsing SQL that the user provides; when data-jdbc generates the SQL internally, the fragment can easily be spliced in.
@appreciated I don't know if my description is clear enough; your further responses are welcome.
@appreciated @schauder What do you (your team) think?
@DreamStar92 I did not have the time yet to dig into the Spring Data JDBC source code. So currently I cannot answer this. @schauder could you give us some input on this matter?
Hi, we had some discussion on the matter and concluded that we need some more thinking time, since a feature like this will have quite a ripple effect.
Vacation time is coming up, so it will take some time until we can decide how we want to proceed.
Summary
In the current project I am working on, I have encountered an issue with the lack of built-in support for `MappedCollections` used in conjunction with soft-delete functionality in Spring Data JDBC. The potential solutions discussed in the StackOverflow thread linked below seem to require a potentially high maintenance effort when upgrading to new versions of Spring Data JDBC.
Note: I have previously discussed this issue with @schauder and was advised to create this issue.
Abstract Goal
To add support for filtering `MappedCollections` in Spring Data JDBC, which would allow for more dynamic and controlled data handling, similar to the capabilities provided by Hibernate's `@Filter` annotation.
Research
As already mentioned, I found a relevant discussion on StackOverflow outlining several potential solutions (StackOverflow Discussion).
Proposed Action
I would like to offer to create a pull request to integrate this functionality into Spring Data JDBC. My preference is for a declarative approach akin to Hibernate's `@Filter` annotation, but I am open to suggestions and willing to explore other viable options.
Next Steps