efcore / EFCore.CheckConstraints

An Entity Framework Core plugin to automatically add check constraints in various situations

Range check constraints not generated for some datatype #120

Open molinch opened 10 months ago

molinch commented 10 months ago

It seems like Range check constraints aren't always generated. If the data type is int it always works; however, with byte or float the constraint is sometimes not generated.

Are you aware of this limitation? I can probably create a sample that demonstrates this.
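
For context, a hypothetical model of the kind being described (illustrative only, not the actual repro):

```csharp
using System.ComponentModel.DataAnnotations;

// Hypothetical entity: with this plugin enabled, the int property reliably
// gets a CHECK constraint, while the byte and float ones sometimes do not.
public class Measurement
{
    public int Id { get; set; }

    [Range(0, 100)]
    public int IntValue { get; set; }     // constraint generated

    [Range(0, 100)]
    public byte ByteValue { get; set; }   // constraint can be missing

    [Range(0.0, 100.0)]
    public float FloatValue { get; set; } // constraint can be missing
}
```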

roji commented 10 months ago

@molinch no, AFAIK there's not supposed to be such a limitation - can you please submit a minimal, runnable code sample?

molinch commented 10 months ago

I created a repro sample at https://github.com/molinch/EntityFrameworkRelatedBugs. I found two things:

To save you time:

roji commented 10 months ago

Thanks. Just to set expectations, it will take me a while to get around to investigating this, as there's a lot going on at the moment.

molinch commented 10 months ago

I am already grateful that you will look into this 👍 And since this is a side project, that's clearly understood.

kjkrum commented 9 months ago

Just encountered this with a byte column. Using MySQL 8.0.34 with EF Core 7.0.11 and the Pomelo 7.0.0 provider, in case it matters.

Logerfo commented 8 months ago

It's not working for decimal as well, and I can tell why: https://github.com/efcore/EFCore.CheckConstraints/blob/0a7422d7a35a26a93c5802e332a171a6e2bcbd29/EFCore.CheckConstraints/Internal/ValidationCheckConstraintConvention.cs#L111-L118

Since RangeAttribute only accepts int or double as arguments, the code linked above rejects the attribute whenever the property type is, for instance, float or decimal, so nothing happens.
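
For illustration, here is a tiny standalone reproduction of the type mismatch being described (an assumed reconstruction of the guard's logic, not the plugin's actual source):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

class Demo
{
    static void Main()
    {
        // [Range(1.0, 10.0)] stores its operands as double,
        var range = new RangeAttribute(1.0, 10.0);

        // so comparing the operand type against a decimal (or float)
        // property's CLR type never matches, and a guard like the one in
        // the linked convention code silently skips the constraint.
        Type propertyClrType = typeof(decimal);
        Console.WriteLine(range.Minimum.GetType());                    // System.Double
        Console.WriteLine(range.Minimum.GetType() == propertyClrType); // False
    }
}
```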

@roji what do you suggest here? Should we have a list of acceptable types instead of just comparing dynamically? I think they would be: int, uint, double, float, decimal, byte, short, ushort, long, ulong. Maybe INumber would help? Any other idea?

roji commented 8 months ago

Looking at the code, simply accepting other types and trying to pass them into the type mapping's GenerateSqlLiteral may or may not work, depending on what that function expects and how strict it is. So I think it would be safer to convert to the target type first (i.e. have a set of conversions for int, and another for double). How does that sound?

Logerfo commented 8 months ago

@roji Since it's possible to write create table foo (bar int check (bar > 1.2)); and it works fine, maybe we could have something simpler. The type mapping used to call GenerateSqlLiteral could always be associated with double (since the other possible argument type, int, can always be converted to double AFAIK), and we could check only whether the property type is assignable/castable to double.

Another way to check this would be to look for the operator x > y in the database, where x is the DbType for the property type (ClrType?) and y is the DbType associated with the double CLR type, which I think is the identically named DbType.Double. That might allow the attribute to work in a wider range of cases, including user-defined types or operators, tied to the resources available in the currently connected database. What do you think?

roji commented 8 months ago

> Since it's possible to write create table foo (bar int check (bar > 1.2)); and it works fine, maybe we could have something simpler. The type mapping used to call GenerateSqlLiteral could always be associated with double (since the other possible argument type, int, can always be converted to double AFAIK), and we could check only whether the property type is assignable/castable to double.

I don't think that's a good idea - I'm not sure all databases accept (i.e. implicitly convert) 1.2 as int, and even if they did, I wouldn't want to see 1.0 in my SQL when a simple int was passed to [Range]. Simply supporting both doesn't seem like much work either.

Regarding the rest, I'm afraid I don't understand what you mean exactly, e.g. what does "looking for an operator in the database" mean? Maybe submit a quick code sample or similar to illustrate?

Logerfo commented 8 months ago

@roji I don't think what's happening is an implicit conversion between double and int, but rather just a comparison between them, which, aside from system-defined operators, could also be achieved through user-defined operators; that's why I suggested "looking for an operator in the database".

I don't know if there's a better way to do this (e.g. a system view for operators), but at least in Postgres, running select null::int > null::decimal returns null, which means the operator > exists between the types int and decimal, while running select null::text > null::decimal returns an error, which means the operator > does not exist between the types text and decimal. This could be the key information to gather when deciding whether or not to emit the check constraint from a RangeAttribute. It would be user-proof (the operator could be user-defined in the connected database) and future-proof (the operator could become system-defined in a future DBMS version).

roji commented 8 months ago

@Logerfo this plugin can't start sending queries to the database in order to know whether operators exist - that is well beyond its scope. In addition, the plugin needs to do its work in situations where there's no database at all, e.g. when adding a new migration.

The task here is really not that hard - just a small static translation table from the two types actually supported by [Range] (int and double) to the CLR type of the type mapping of the property.
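
A rough sketch of that idea (an assumed shape for illustration, not the plugin's actual code): convert the [Range] operand to the property's mapped CLR type before generating the SQL literal.

```csharp
using System;

static class RangeOperandConverter
{
    // [Range] only ever carries int or double operands, so two small
    // conversion paths cover every numeric property type.
    public static object ConvertOperand(object operand, Type propertyClrType)
        => operand switch
        {
            // int operands, as in [Range(1, 10)], e.g. for byte/short/long columns.
            int i => Convert.ChangeType(i, propertyClrType),
            // double operands, as in [Range(1.0, 10.0)], e.g. for float/decimal columns.
            double d => Convert.ChangeType(d, propertyClrType),
            _ => throw new NotSupportedException(
                $"Unsupported [Range] operand type: {operand.GetType()}.")
        };
}
```

The converted value could then be passed to the type mapping's GenerateSqlLiteral, so an int-based [Range] on a byte column emits 1 rather than 1.0.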

airbreather commented 1 month ago

Stumbled into this for a decimal property.

The fix for me was to call _ = attribute.IsValid(null); right before the rest of AddRangeConstraint runs. This ensures that the Minimum and Maximum properties are initialized to the correct type.

This workaround might only work if you are using the three-parameter attribute constructor, though... at least, I didn't test it with the others.
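
For anyone trying this, here is a small standalone illustration of why the call helps (a sketch assuming the three-parameter RangeAttribute(Type, string, string) constructor, which keeps its bounds as raw strings until the first validation):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

class Demo
{
    static void Main()
    {
        // With the three-parameter constructor, Minimum/Maximum start out
        // as the raw strings passed in.
        var attribute = new RangeAttribute(typeof(decimal), "0.0", "100.0");
        Console.WriteLine(attribute.Minimum.GetType()); // System.String

        // IsValid(null) succeeds (null values are considered valid), but as
        // a side effect it converts Minimum/Maximum to the operand type.
        _ = attribute.IsValid(null);
        Console.WriteLine(attribute.Minimum.GetType()); // System.Decimal
    }
}
```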