dotnet / efcore

EF Core is a modern object-database mapper for .NET. It supports LINQ queries, change tracking, updates, and schema migrations.
https://docs.microsoft.com/ef/
MIT License

Flexible mapping to CLR types and members (Custom O/C Mapping) #240

Open rowanmiller opened 10 years ago

rowanmiller commented 10 years ago

This is a grouping of related issues. Feel free to vote (👍) for this issue to indicate that this is an area that you think we should spend time on, but consider also voting for individual issues for things you consider especially important.


This is an "epic" issue for the theme of improvements to mapping of different CLR constructs to standard database concepts. Specific pieces of work will be tracked by linked issues.

Done in 7.0

Done in 8.0

Done in 9.0

Backlog


Although we have had support for POCOs through a few major versions, using EF for persistence still places some undesired constraints on the design choices developers can make on their domain objects.

For example, EF Core still requires that mapped CLR types can be instantiated through constructors without parameters (this is now only true for navigation properties), that they expose both a getter and a setter for each mapped scalar and reference property, and that all mapped collection navigations are exposed as properties whose type implements ICollection<T>.
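
For illustration only (made-up types), this is roughly the entity shape those constraints push you towards:

using System.Collections.Generic;

public class Customer { public int Id { get; set; } }
public class OrderLine { public int Id { get; set; } }

public class Order
{
    // Can be instantiated without arguments.
    public Order() { }

    // Scalar and reference properties with both getters and setters.
    public int Id { get; set; }
    public Customer Customer { get; set; }

    // Collection navigation exposed as a type implementing ICollection<T>.
    public ICollection<OrderLine> Lines { get; set; } = new List<OrderLine>();
}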

Note that POCO proxy functionality introduces additional constraints: lazy loading requires that property getters are virtual, and change tracking proxies (not yet supported in EF Core) require that all properties are virtual and that properties mapped to collections are effectively declared as ICollection<T>.

EF Core also requires each entity type in the model to be mapped to a distinct CLR type, which makes some dynamic model scenarios (#2282) harder.

Moreover, scalar properties on objects have to be of a small set of recognized types in order to be mapped (in EF Core, the set of types supported natively by the provider).

Richer constructs such as collections of scalars or collections of complex types, ordered collections, inheritance in complex types, collection manipulation methods, factory methods, and immutable objects are not supported.

This issue tracks the removal of those constraints as a whole and serves as a parent issue for some individual features that we will track independently:

An old hope

One of the many interesting design strategies of EF as an O/RM was that every EF model defined an abstract data model separate from the object model. When using EF in regular O/RM scenarios there are in fact three layered models involved: the storage/database model (often referred to as the S-space model), the conceptual/abstract data model (often referred to as the C-space model) and the actual object model (also known as the O-space model). EF also makes use of two mapping specifications: the translation of data access operations between the C-space model and the S-space model is the most well known and is where most of the mapping capabilities of EF focus. The mapping between the C-space and the O-space models is less widely known, in part because only trivial 1:1 mappings are supported.

There are many historical motivations for this underlying design, including the desire to provide a complete "weakly typed" programming interface (often referred to as a "value layer") that could be used in scenarios outside the traditional O/RM space, such as runtime metadata-driven user interfaces and tools, database reporting, etc. EntityClient was intended to be such a weakly typed API, but in the first version of EF it had considerable restrictions (e.g. it was read-only) and some usability issues (e.g. it was designed after an ADO.NET provider, without a public DbDataAdapter and with a quite complicated connection string format). Since its popularity never took off, the investments necessary to unleash the "value layer" were never completed. In the meantime, EF's role in solving the traditional O/RM scenario of mapping strongly typed objects to database tables and functions has become more and more relevant since EF4, so the lion's share of the investments have gone into making EF easier to use in those scenarios, which in many cases meant burying the separation between the abstract data model and the object model even deeper under layers of new API.

In practice the object model can, for most intents and purposes, be considered a straight realization of what is in C-space. The most commonly used EF APIs take advantage of this similarity and simply conflate C-space and O-space, abstracting away the fact that there is a layer of indirection. In other words, although this differentiation between C-space and O-space is real and very explicit in the implementation, it is purposely not evident in most of the EF APIs and usage patterns. A clear example of this is Code First: to a great degree, the usability of the Code First API relies on users being able to ignore the existence of an abstract data model.

While sharing the same meta-model for conceptual models has made it easier to expose EF models through things like OData services, the majority of EF users have never needed to know that there is a difference between the conceptual and object models of their applications.

Regardless of the past motivations for creating this extra layer of indirection, there is potential in the decoupling of the conceptual level (at which most of EF's mapping, query and update pipeline operates) and the object level. The basic idea is that by breaking the current status quo of 1:1 O-C mapping, it should be possible to extend EF to support much richer object mapping without having to touch the guts of the system at all.

rowanmiller commented 9 years ago

Here is a good example of this feature (good case to verify against once we implement it) - https://github.com/aspnet/EntityFramework/issues/1009

divega commented 9 years ago

@rowanmiller Looking at this it seems that the bug I filed yesterday is more specific about property access while this is the all-encompassing flexible mapping. Would you mind if I just add #2968 as a child, similar to how #857 is a child specific to collection patterns?

rowanmiller commented 9 years ago

@divega yep that works

divega commented 9 years ago

Otherwise I can merge it all here...

divega commented 9 years ago

Made some edits to the original text of the issue to add background and additional links to related issues.

biqas commented 8 years ago

Hi, the following example illustrates how interface definitions could perhaps be used to map EF structures.

public class BloggingContext : DbContext
{
    // Type mapping for materialization
    public DbSet<IBlog, Blog> Blogs { get; set; }

    // Materialization can create a type on the fly because no specific type was given (generators or emitting).
    public DbSet<IPost> Posts { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<IBlog>()
            .Property(b => b.Url);

        // Optional: provide type creation factory.
        modelBuilder.Entity<IBlog>(() => new Blog());

        // Optional: provide collection behavior; if none is provided then use the default, or emit/generate one.
        modelBuilder.Collection<IList<Any>>((IList<Any> collection, Any item) => collection.Add(item));
    }
}

// Any represents all kind of types which can be used in a collection.
class Any { }

public class DbSet<T, TEntity> where TEntity : class, T { }

public interface IBlog
{
    int BlogId { get; set; }

    string Url { get; set; }

    IList<IPost> Posts { get; set; }
}

public class Blog : IBlog
{
    public int BlogId { get; set; }

    public string Url { get; set; }

    public IList<IPost> Posts { get; set; }
}

public interface IPost
{
    int PostId { get; set; }

    string Title { get; set; }

    string Content { get; set; }

    int BlogId { get; set; }

    IBlog Blog { get; set; }
}

public class Post : IPost
{
    public int PostId { get; set; }

    public string Title { get; set; }

    public string Content { get; set; }

    public int BlogId { get; set; }

    public IBlog Blog { get; set; }
}

I know there are a lot of other scenarios, so if someone has specific questions about how, for example, x, y, and z would fit in this kind of mapping, please ask and I will try to provide examples and explanations.

bjorn-ali-goransson commented 8 years ago

Actually, today's code base is quite close to being able to handle this. It just needs to be relaxed in some parts. I mean, the IClrPropertyGetter/IClrPropertySetter infrastructure is already there.

I got stuck at the type check in EntityType.AddProperty; I don't know if it would have worked otherwise (probably not...).

We need to be able to specify, when adding a Property:

1) CLR type (not needed in practice as PropertyInfo.SetValue takes object)
2) Resulting primitive backing type (for the DB provider)
3) Getter serializer (converts from the CLR type to the resulting primitive type)
4) Setter parser (converts from the resulting primitive type to the CLR type)

And the type check mentioned above needs to not happen if(Flexible).

Here's a (hopefully somewhat) functioning example of what I'd like to achieve:

public class Context : DbContext
{
    private static readonly IServiceProvider _serviceProvider = new ServiceCollection()
        .AddEntityFrameworkSqlServer()
        .AddSingleton<ICoreConventionSetBuilder, MyCoreConventionSetBuilder>()
        .BuildServiceProvider();

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseInternalServiceProvider(_serviceProvider)
            .UseSqlServer(@"Data Source=.\SQLEXPRESS2014;Initial Catalog=EfCoreTest;Integrated Security=True;");
    }

    private class MyCoreConventionSetBuilder : CoreConventionSetBuilder
    {
        public override ConventionSet CreateConventionSet()
        {
            var value = base.CreateConventionSet();
            value.EntityTypeAddedConventions.Add(new MyPropertyDiscoveryConvention());
            return value;
        }
    }

    private class MyPropertyDiscoveryConvention : IEntityTypeConvention
    {
        public InternalEntityTypeBuilder Apply(InternalEntityTypeBuilder entityTypeBuilder)
        {
            entityTypeBuilder.Metadata.AddProperty(
                "Ancestors",
                propertyType: typeof(string),
                flexible: true,
                serialize: value => JsonConvert.SerializeObject(value),
                parse: value => JsonConvert.DeserializeObject<List<int>>((string)value)
            );

            return entityTypeBuilder;
        }
    }

    public DbSet<Page> Pages { get; set; }

    public class Page
    {
        public int Id { get; set; }
        public List<int> Ancestors { get; set; }
    }
}
bjorn-ali-goransson commented 8 years ago

@rowanmiller any thoughts or progress on this? I really think that just relaxing the checks a bit on this (and maybe generalizing the structure of Property a little) would get us 80% there.

( @divega )

Also, FWIW, I now think that we would be better off with a new AddFlexibleProperty method rather than further overloading the AddProperty.

ma3yta commented 8 years ago

@rowanmiller what is blocking the implementation of this? Maybe I can help with it?

Antaris commented 7 years ago

My example of where this would be useful would be something like the following:

public class BlogPost
{
    // Other properties omitted for brevity...

    public TagSet Tags { get; set; }
}

In my database, I want to store tags as a flat list (not a relational table), purely for lookup efficiency. It's simple to query with something like WHERE [Tags] LIKE '%|MyTag|%' to return matching rows, etc.

With this in mind, I could form a type like the following:

public struct TagSet : ICollection<string>
{
    private string _tagString;
    private Lazy<HashSet<string>> _collectionThunk;
    private bool _modified;
    private bool _hasValue;

    public TagSet(string tagString)
    {
        _tagString = tagString;
        _collectionThunk = new Lazy<HashSet<string>>(() => ResolveCollection(tagString));
        _modified = false;
        _hasValue = true;
    }

    public int Count => _collectionThunk.Value.Count;

    public bool IsReadOnly => false;

    public void Add(string item)
    {
        _collectionThunk.Value.Add(item);
        _modified = true;
    }

    public void Clear()
    {
        _tagString = null;
        _collectionThunk = new Lazy<HashSet<string>>(() => ResolveCollection(null));
        _modified = true;
    }

    public bool Contains(string item)
    {
        if (string.IsNullOrEmpty(_tagString))
        {
            return false;
        }

        if (_collectionThunk.IsValueCreated)
        {
            return _collectionThunk.Value.Contains(item);
        }

        var culture = CultureInfo.CurrentCulture;
        return culture.CompareInfo.IndexOf(_tagString, $"|{item}|", CompareOptions.IgnoreCase) >= 0;
    }

    public void CopyTo(string[] array, int arrayIndex) => _collectionThunk.Value.CopyTo(array, arrayIndex);

    public IEnumerator<string> GetEnumerator() => _collectionThunk.Value.GetEnumerator();

    public bool Remove(string item)
    {
        if (!string.IsNullOrEmpty(_tagString))
        {
            _modified = true;
            return _collectionThunk.Value.Remove(item);
        }
        return false;
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();

    private static HashSet<string> ResolveCollection(string value)
    {
        var hashSet = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

        if (!string.IsNullOrEmpty(value))
        {
            value = value.Trim('|');
            foreach (string tag in value.Split(new[] { '|' }, StringSplitOptions.RemoveEmptyEntries))
            {
                hashSet.Add(tag);
            }
        }

        return hashSet;
    }

    public static implicit operator string(TagSet tags)
    {
        if (tags._modified)
        {
            if (tags._collectionThunk.IsValueCreated)
            {
                if (tags._collectionThunk.Value.Count == 0)
                {
                    return null;
                }

                return $"|{string.Join("||", tags._collectionThunk.Value)}|";
            }
        }

        return tags._tagString;
    }

    public static implicit operator TagSet(string tagString) => new TagSet(tagString);

    public override string ToString() => this;

    public bool HasValue => _hasValue;
}

This accepts a string such as |Hello||World||These||Are My||Tags| and allows lazy evaluation of it as an ICollection<string>. It can react to changes and allows me to rebuild my flat tag string when I need it.

I wondered how I would like this to integrate into EF, perhaps with a custom interface (taking some inspiration from AutoMapper):

public interface IValueMapper<TSource, TTarget>
{
    TTarget Map(TSource source);

    TSource Map(TTarget target);
}

public class TagSetValueMapper : IValueMapper<string, TagSet>
{
    public string Map(TagSet target) => target;

    public TagSet Map(string source) => new TagSet(source);
}

Perhaps we could then configure this as part of the EF model:

builder.Property(m => m.Tags).IsOptional().Maps<TagSet, string>(new TagSetValueMapper());

Honestly, no idea how you would handle LINQ expressions though.

Thoughts? Would this be achievable? Perhaps even using a custom type converter instead? Who would be responsible for handling DbNull checking though? EF or a mapper?

divega commented 7 years ago

@Antaris not disagreeing that yours is a valid scenario for this feature but could you elaborate more on the motivation? You said it is "purely for lookup efficiency" but my understanding is that this hinders searchability to make persistence and retrieval potentially more efficient, e.g. with a regular index it is not possible to resolve WHERE [Tags] LIKE '%|MyTag|%', but once you apply this pattern it is no longer necessary to update and query an additional table (or tables) for the tags.

Re LINQ, as you mentioned, it becomes more complicated. Two possible general approaches:

  1. Stake in the ground: When you do something like this you are effectively opting out of searchability, so evaluating a predicate that refers to Tags on the client (or failing if client evaluation is disabled) may be good enough. It will be up to you to make sure that your queries contain predicates on criteria other than Tags so you will never load the whole table into memory.
  2. The mapping transformation (or a version of it) could be represented in a LINQ expression rather than regular code. References to Tags in a query could then be expanded by inlining that expression before we attempt to compute the SQL translation (see the sketch below).
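
As a rough, hypothetical sketch of option 2 (none of this is an existing EF API): the string-to-tags transformation written as a LINQ expression tree rather than opaque code, so that references to Tags in a predicate could in principle be expanded/inlined against the underlying string column before SQL translation is attempted:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

static class TagMapping
{
    // Expression form of "split the stored |a||b| string into individual tags".
    public static readonly Expression<Func<string, IEnumerable<string>>> ExpandTags =
        tagString => tagString == null
            ? Enumerable.Empty<string>()
            : tagString.Trim('|').Split(new[] { '|' }, StringSplitOptions.RemoveEmptyEntries);
}
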
Antaris commented 7 years ago

@divega

Hmmm, I hadn't thought of that :-/ Very good point on the indexes front. From previous experience I've found the flat list vs. many-to-many tables setup to be more efficient in terms of manageability, but I hadn't really considered T-SQL index performance. I'll have to think about it and see about potential impacts. I guess I'd need to weigh searchability vs. storage efficiency :-/

Unless, of course, I merge the functionality for having tags into both a flat list -> TagSet mapping AND many-to-many relationship tables: the former for efficient reading and the latter for searching. Given the size of the data sets I might be working with, it might be an option. I'd have to manage the mapping tables when saving the parent entities :-/

Antaris commented 7 years ago

@divega Out of interest, I did some proof-of-concept work by fiddling with some of the internal API, and I got it working from the materialization point of view.

https://gist.github.com/Antaris/ca0f271311fdb84ef3a8db95b149a307

But, because my struct isn't immutable, the state manager doesn't pick up the change.

After playing around with it, I hacked in an ISnapshotable<T> interface which essentially cloned the value for the snapshot, and then it works. I couldn't do that through a replaceable EF service (I had to mod SnapshotFactoryFactory), but as a proof of concept I'm quite happy with it.
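
For context, the interface presumably looked roughly like this (hypothetical; the real shape is in the linked gist):

public interface ISnapshotable<T>
{
    // Return a copy of the current value for the change tracker to keep as its
    // snapshot, so later in-place mutations can be detected by comparison.
    T Snapshot();
}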

Next stop - query expressions!

Antaris commented 7 years ago

Updated my Gist; got expressions working, but it was definitely tricky.

My TagSet implements ICollection<string>, and the ICollection<T>.Contains method is special-cased in re-linq as a result operator (unlike string.Contains), so I had to rename my method to Exists and create a custom method translator for it.

Seems to work nicely.

bjorn-ali-goransson commented 7 years ago

Since this doesn't seem to be on the roadmap, I'd like to stop waiting and ask: is this functionality available in any other framework?

EF6? (rather not but ...) Maybe Hibernate.NET?

marchy commented 7 years ago

Guys this should be prioritized. The whole point of an ORM is to Map to Objects... and EF has traditionally stood out particularly due to its ability to map to POCOs / domain representations of objects.

This means removing the friction of relational mapping and the ability to hide the complexity. Today there is still quite a LOT of friction in this and the reality is quite a ways from the promise/vision.

I know the team's been really focused on re-creating EF Core for modern computing paradigms (cloud, lightweight client, mobile – still missing, though really who cares – Realm has you beat)... but I feel it's really time to get back to new feature growth as opposed to catch-up/parity with EF6.

EF's biggest benefits are that of productivity by removing the object-relational friction / impedance mismatch and custom mapping would allow the framework to much better deliver on its promise.

Would love to see the team bite off this big, ambitious chunk and put the framework on an exciting path against the alternatives!

Keep up the great work!

hidegh commented 7 years ago

I hope flexible mapping also includes the possibility to totally bypass (turn off, remove) the property auto-mapping feature. Is this possible at the current stage?

If I have an O-O class and I extend it with properties I don't want to write to the DB, I need to go to the mapping class and list every property I don't want to map just to get it ignored.
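
For illustration, that per-property opt-out looks roughly like this with today's API (made-up names):

using Microsoft.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations.Schema;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }

    [NotMapped]                                    // attribute-based opt-out
    public string CachedDisplayText { get; set; }

    public string ImportScratchData { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Fluent opt-out: exclude the property from the model entirely.
        modelBuilder.Entity<Product>().Ignore(p => p.ImportScratchData);
    }
}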

marchy commented 7 years ago

@hidegh Agreed, that would be super helpful. With the number of limitations on EF object mapping, we have a slew of properties outside of what's persisted. It would be much better to opt in to persisting them than to opt out for everything else.

On that note, computed/get-only properties should never need to be explicitly ignored - by definition they can never be persisted, lol. These should be automatically ignored by EF, but right now you have to explicitly ignore those too.

ajcvickers commented 7 years ago

@marchy If you are finding read-only properties (i.e. properties without a setter) being mapped automatically, then can you please file a new issue with a repro. That should not be happening.

hidegh commented 7 years ago

@bjorn-ali-goransson I still vote for NHibernate. It's more mature and had features in 2007 that EF6 still doesn't have. Not to mention that its architecture must be cleaner, because a 3rd-party fluent mapping API for NH is more powerful than the original one in EF6.

Actually I did a test (last week) and here's the result:

EF6 - thanks to conventions I can map a protected internal virtual property (with an underscore prefix) and automate generating clean column names, but the way this is solved with conventions is too primitive (not as intuitive as with Fluent NHibernate). Even collection encapsulation can be solved. So almost a hit - almost - because when using IQueryable, for includes you need to use not the main property but the "backing" one, and this is not an intuitive way to work with IQueryable.
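
A rough sketch of the encapsulation pattern being described (illustrative names; the EF6 convention wiring that maps the non-public backing property and cleans up the column names is omitted):

using System.Collections.Generic;

public class BasketItem { public int Id { get; set; } }

public class Basket
{
    public int Id { get; set; }

    // Mapped "backing" navigation: underscore prefix, protected internal virtual.
    protected internal virtual ICollection<BasketItem> _Items { get; set; } = new List<BasketItem>();

    // Encapsulated public surface; not mapped.
    public IEnumerable<BasketItem> Items => _Items;

    public void AddItem(BasketItem item) => _Items.Add(item);
}

And, as noted above, an Include then has to target _Items rather than Items.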

EF Core - it now seems they don't map fields by default (but I need to re-verify). Anyhow, it seemed perfect. I got the feel of it and started to like it despite it not having real conventions, missing many-to-many, and GROUP BY not happening at the database level... Then came the big surprise of what the missing lazy loading feature means - it means problems, real big problems that will force you into debugging. It's not lazy loading itself that I miss, because eager loading is a must with an ORM. But when using NH/EF6 without a DB connection and a lazy-load proxy is hit, guess what happens: right, you get an exception (so the missing lazy loading actually means missing exceptions). Now EF Core does not use proxies (neither its own nor 3rd-party ones). So anything not eagerly loaded is null or an empty collection. EF Core will never complain that you are accessing something that is not loaded; it simply serves you bogus values (nulls and empty collections). If you think this is not a big issue, ask people who hit a similar issue with LINQ to SQL and wasted productive hours finding the bug...

Despite the main team leaving NHibernate in 2012, it is still maintained, albeit with less frequent releases, and it's still in better shape than any of the existing EFs backed by Microsoft. And just to be a bit mean: how can a framework that is not production ready (perf issues due to GROUP BY, the mentioned lazy loading issue) have a version number at or above 1.0?

If someone asks me, I always say this: for NH you get a nice manual on how to do things... for EF you get documents on how to hack it so that you get a near-ORM feel from it. Sorry guys, EF Core might one day be a nice product, but at v1.1 it is still not, despite there having been an open-source ORM out there for more than a decade that you could learn from...

jnm2 commented 7 years ago

@hidegh NHibernate not having any async support is a dealbreaker for me.

hidegh commented 7 years ago

@jnm2 Clean code will keep the cost of quality low (fewer bugs, easier to maintain, easier to extend) - consider the price of a developer vs. the price of better hardware... BTW, if you need performance, there's nothing wrong with having NHibernate for the command side and views, and EF (even database-first) with async for the read model.

jnm2 commented 7 years ago

I'm most familiar with writing desktop clients, and from personal experience, you need async saves. Otherwise you end up with UI lag or thread safety issues to work around. Still a dealbreaker for me.

bjorn-ali-goransson commented 7 years ago

NHibernate seems quite "dead", no?

I'll check out RavenDB while waiting for this.

andez2000 commented 6 years ago

Any progress on using readonly fields and properties with Entity Framework? Currently I just want to pass a Guid into the constructor and store it in a readonly property, à la #10400.

Maybe we could have a Materializer which could feed mappings into some OnCreate callback where we could decide which constructor to call to materialize our entity?

class InvoiceEntityTypeConfiguration : IEntityTypeConfiguration<Invoice>
{
    public void Configure(EntityTypeBuilder<Invoice> builder)
    {
        builder.ToTable("Invoices", "dbo");
        builder.Materializer.OnCreate((PropertyMap pm) =>
        {
            return new Invoice(pm.Id, pm.Number);
        });
    }
}

Probably a dumb thought, but these kinds of design decisions are killing me. I don't want to duplicate an entity into its own state type if I can help it.

smarts commented 5 years ago

Which one of the issues in this issue's description handles support for unsigned integer types?

ajcvickers commented 5 years ago

@smarts Unsigned integer types are supported as of EF Core 2.1.

smarts commented 5 years ago

@ajcvickers is there documentation on how to use this feature? My team recently tried to use them and was getting an exception that EF Core was expecting Int32 instead of UInt32. Do you know if this feature works for shadow properties too? Also, does it work for any DB integer type? Our specific use cases are SQL Server and PostgreSQL.

ajcvickers commented 5 years ago

@smarts Some databases natively support storing unsigned types, in which case they should behave like any other property. Some databases, like SQL Server, don't support unsigned types, but EF should automatically perform a value conversion if you have such a type in your model. In this case the value is stored in the database as a bigger signed type, by default.
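
For illustration, this is roughly what that default amounts to if you were to configure it explicitly (entity and property names are placeholders):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Store the uint property in a larger signed column (e.g. bigint on SQL Server),
    // which can hold the full uint range without reinterpreting any values.
    modelBuilder.Entity<Counter>()
        .Property(e => e.Hits)        // Hits is declared as uint on the entity
        .HasConversion<long>();
}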

If you have an entity type with unsigned properties and you are seeing errors, then please file a new issue with a small, runnable project/solution or complete code listing that demonstrates the behavior you are seeing.

smarts commented 5 years ago

@ajcvickers sorry for the delay in responding.

In this case the value is stored in the database as a bigger signed type, by default.

Does that mean that using UInt32 for an entity's property will only work if the DB type for the corresponding column supports the full range of values for UInt32? I.e., we want to use INT in the DB and UInt32 on the entity. While I understand that this conversion is generally bad, in the specific case of [generated] IDs the risk becomes negligible. Admittedly, this is based on my minimal experience with SQL DBs, so I apologize if this assumption is incorrect. Is it possible to allow this generally-risky conversion for the negligible-risk scenario of IDs via some custom EF Convention?

roji commented 5 years ago

@smarts take a look at value converters - you can set up lossy value conversions, but EF will not do that automatically for you (as it does with uint->long). However, lossy value conversions may get a bit tricky in some scenarios, and are best avoided if you have an easy lossless alternative (such as long).

markusschaber commented 5 years ago

@roji As far as I can see, conversion between UInt32 and Int32 is lossless if you use "C-style" casts. Negative Int32 values are mapped to "high" unsigned values and back; one just has to be careful when interpreting the converted values (e.g. the ordering is different). SQL Server, PostgreSQL and MySQL provide "int" as a 32-bit signed data type, so storing a UInt32 there via a C-style cast looks feasible. (I didn't check other DBs.)
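
As a sketch (placeholder names; not something EF does by default), such a cast-based converter could be set up inside OnModelCreating like this:

using Microsoft.EntityFrameworkCore.Storage.ValueConversion;

// Round-trips every uint value through a signed 32-bit column using plain casts;
// values above int.MaxValue are stored as negative numbers, so ordering differs.
var uintToInt = new ValueConverter<uint, int>(
    v => (int)v,
    v => (uint)v);

modelBuilder.Entity<Counter>()
    .Property(e => e.Hits)
    .HasConversion(uintToInt);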

roji commented 5 years ago

@markusschaber you're technically right: if your value converter does a simple cast in C#, then uint values which are larger than int.MaxValue will show up as negatives. While this is technically lossless and would work, as you say there are some tricky caveats.

Given that databases typically have a larger bigint type which can contain all uint values, it generally really makes sense to convert to that - trying to save 4 bytes isn't going to be justifiable in most cases.

smarts commented 5 years ago

@markusschaber I'm not sure if you noticed, but I'm specifically talking about auto-generated IDs in the database. Using long for the entity's property type has the same problem (allowing negative values), which isn't what I want. Also, as far as large values… since the value only ever comes from the DB (i.e., not set by any C# code outside of EF Core reading DB values), it will never be larger than Int32.MaxValue (because INT is the DB type).

ajcvickers commented 5 years ago

Just to close the loop on this. When deciding how to convert automatically, we look at:

The second point is why we convert uint to long, and not uint to int, since for the latter, if the uint does have a value that can't "fit" it will still be stored, but it will be interpreted as negative by the database and hence will have a different ordering.

That being said, it's safe to convert uint to int if you know that the domain space for your uints can fit in an int, or if a specific ordering on the database is not needed. One way to do it is:

modelBuilder.Entity<Foo>().Property(e => e.MyUInt).HasConversion<int>();

Finally, there are currently some limitations for conversion on keys: see #11597

smitpatel commented 5 years ago

@ajcvickers - What exactly is the needs-cleanup label?

voroninp commented 3 years ago

Proposed for 6.0

Does it mean not earlier than November 2021?

roji commented 3 years ago

@voroninp yes, 6.0 would mean November 2021. Previews would make this available much earlier, though.

atrauzzi commented 3 years ago

I'm just here to say I've been waiting 6 years and counting for polymorphic relations using discriminators. I think it has more than made a case for itself, judging from the digging I've been doing here.


Something along the lines of:

Integration
IIntegratable
User : IIntegratable
Organization : IIntegratable

Therefore, the integration table in the database gets integratable_type and integratable_id columns, and then I could go:

integration.Integratable...; // (returns an IIntegratable that can be passed through pattern matching to detect the concrete type)
// and of course:
user.Integrations...; // (returns a list of `Integration` instances)
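
A hypothetical sketch of the shapes involved (this is not something EF Core supports today; it is only meant to make the ask concrete):

public interface IIntegratable
{
    int Id { get; }
}

public class User : IIntegratable { public int Id { get; set; } }
public class Organization : IIntegratable { public int Id { get; set; } }

public class Integration
{
    public int Id { get; set; }

    // Backed by the integratable_type / integratable_id columns.
    public string IntegratableType { get; set; }
    public int IntegratableId { get; set; }

    // Desired polymorphic navigation: materialized as whichever concrete type
    // IntegratableType names, then pattern matched by the caller.
    public IIntegratable Integratable { get; set; }
}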