dotnet / efcore

EF Core is a modern object-database mapper for .NET. It supports LINQ queries, change tracking, updates, and schema migrations.
https://docs.microsoft.com/ef/
MIT License

SQL Server: Optimize SQL Server OUTPUT clause usage when retrieving database-generated values #27372

Closed roji closed 2 years ago

roji commented 2 years ago

When inserting an entity with database-generated columns, we currently generate the following (as long as there's no IDENTITY column):

DECLARE @inserted0 TABLE ([Id] int);
INSERT INTO [Blogs] ([Name]) OUTPUT INSERTED.[Id] INTO @inserted0 VALUES (@p0);
SELECT [i].[Id] FROM @inserted0 i;

This could be simplified into this:

INSERT INTO [Blogs] ([Name]) OUTPUT INSERTED.[Id] VALUES (@p0);

The roundabout through the @inserted0 table variable is probably because the OUTPUT clause won't work if there's a trigger defined, unless it's an OUTPUT INTO (@AndriySvyryd is this right? Any more context?).
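For reference, the restriction is easy to reproduce; a minimal sketch, assuming the [Blogs] table above and an illustrative no-op trigger:

```sql
CREATE TRIGGER [Blogs_Trigger] ON [Blogs] AFTER INSERT AS RETURN;
GO

-- Fails: a DML statement with a plain OUTPUT clause is rejected when the
-- target table has an enabled trigger.
INSERT INTO [Blogs] ([Name]) OUTPUT INSERTED.[Id] VALUES (@p0);

-- Works: OUTPUT ... INTO a table variable is allowed even with triggers.
DECLARE @inserted0 TABLE ([Id] int);
INSERT INTO [Blogs] ([Name]) OUTPUT INSERTED.[Id] INTO @inserted0 VALUES (@p0);
```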

Unfortunately, using OUTPUT INTO instead of OUTPUT adds a lot of overhead:

// * Summary *

BenchmarkDotNet=v0.13.0, OS=ubuntu 21.10
Intel Xeon W-2133 CPU 3.60GHz, 1 CPU, 12 logical and 6 physical cores
.NET SDK=6.0.101
  [Host]     : .NET 6.0.1 (6.0.121.56705), X64 RyuJIT
  DefaultJob : .NET 6.0.1 (6.0.121.56705), X64 RyuJIT

|     Method |     Mean |     Error |    StdDev | Ratio | RatioSD |
|----------- |---------:|----------:|----------:|------:|--------:|
|   NoOutput | 2.119 ms | 0.0423 ms | 0.0396 ms |  0.39 |    0.01 |
|     Output | 1.926 ms | 0.0382 ms | 0.0497 ms |  0.36 |    0.02 |
| OutputInto | 5.365 ms | 0.1068 ms | 0.2083 ms |  1.00 |    0.00 |

That's over 3ms just for passing through a table variable! Also, mysteriously, the version with no OUTPUT clause at all performs worse...

Remarks:

Benchmark code:

```c#
BenchmarkRunner.Run<SequenceBenchmark>();

public class SequenceBenchmark
{
    const string ConnectionString =
        "Server=localhost;Database=test;User=SA;Password=Abcd5678;Connect Timeout=60;ConnectRetryCount=0;Encrypt=false";

    private SqlConnection _connection;

    [GlobalSetup]
    public async Task Setup()
    {
        _connection = new SqlConnection(ConnectionString);
        await _connection.OpenAsync();

        await using var cmd = new SqlCommand(
            @"
DROP TABLE IF EXISTS [Foo];
DROP SEQUENCE IF EXISTS [FooSeq];
CREATE SEQUENCE [FooSeq] AS int START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE NO CYCLE;
CREATE TABLE [Foo] (
    [Id] int PRIMARY KEY NOT NULL DEFAULT (NEXT VALUE FOR FooSeq),
    [BAR] int
);", _connection);
        await cmd.ExecuteNonQueryAsync();
    }

    [Benchmark]
    public async Task NoOutput()
    {
        await using var cmd = new SqlCommand("INSERT INTO [Foo] ([Bar]) VALUES (8)", _connection);
        _ = await cmd.ExecuteScalarAsync();
    }

    [Benchmark]
    public async Task Output()
    {
        await using var cmd = new SqlCommand("INSERT INTO [Foo] ([Bar]) OUTPUT INSERTED.[Id] VALUES (8)", _connection);
        _ = await cmd.ExecuteScalarAsync();
    }

    [Benchmark(Baseline = true)]
    public async Task OutputInto()
    {
        await using var cmd = new SqlCommand(
            @"DECLARE @inserted TABLE ([Id] int);
INSERT INTO [Foo] ([Bar]) OUTPUT INSERTED.[Id] INTO @inserted VALUES (8);
SELECT [i].[Id] FROM @inserted i;", _connection);
        _ = await cmd.ExecuteScalarAsync();
    }

    [GlobalCleanup]
    public ValueTask Cleanup() => _connection.DisposeAsync();
}
```
ajcvickers commented 2 years ago

See also #1441, #25345, and #7188.

roji commented 2 years ago

Thanks @ajcvickers, I was looking into IDENTITY as well. Reopened #7188 to revisit that pattern as well.

ajcvickers commented 2 years ago

See also:

roji commented 2 years ago

@AndriySvyryd's suggestion of trying a memory-optimized TVP does reduce the time to something much closer to regular OUTPUT. However:

|                                     Method |       Mean |      Error |     StdDev |  Ratio | RatioSD |
|------------------------------------------- |-----------:|-----------:|-----------:|-------:|--------:|
|                                   NoOutput |   1.905 ms |  0.0376 ms |  0.0649 ms |   0.38 |    0.02 |
|                                     Output |   1.785 ms |  0.0354 ms |  0.0407 ms |   0.36 |    0.01 |
|                                 OutputInto |   4.985 ms |  0.0983 ms |  0.2136 ms |   1.00 |    0.00 |
| OutputInto_MemoryOptimized_persistent_type |   1.935 ms |  0.0378 ms |  0.0518 ms |   0.39 |    0.02 |
|  OutputInto_MemoryOptimized_transient_type | 610.737 ms | 11.9433 ms | 19.2862 ms | 122.22 |    6.18 |

Note: I also benchmarked using ExecuteReaderAsync instead of ExecuteScalarAsync, and there was no difference.

Benchmark code:

```c#
BenchmarkRunner.Run<SequenceBenchmark>();

public class SequenceBenchmark
{
    const string ConnectionString =
        "Server=localhost;Database=test;User=SA;Password=Abcd5678;Connect Timeout=60;ConnectRetryCount=0;Encrypt=false";

    private SqlConnection _connection;

    [GlobalSetup]
    public async Task Setup()
    {
        _connection = new SqlConnection(ConnectionString);
        await _connection.OpenAsync();

        await using var cmd = new SqlCommand(
            @"
DROP TYPE IF EXISTS [tempTableType];
DROP TABLE IF EXISTS [Foo];
DROP SEQUENCE IF EXISTS [FooSeq];
CREATE SEQUENCE [FooSeq] AS int START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE NO CYCLE;
CREATE TABLE [Foo] (
    [Id] int PRIMARY KEY NOT NULL DEFAULT (NEXT VALUE FOR FooSeq),
    [BAR] int
);
CREATE TYPE [tempTableType] AS TABLE ([Id] int PRIMARY KEY NONCLUSTERED) WITH (MEMORY_OPTIMIZED = ON)", _connection);
        await cmd.ExecuteNonQueryAsync();
    }

    [Benchmark]
    public async Task NoOutput()
    {
        await using var cmd = new SqlCommand("INSERT INTO [Foo] ([Bar]) VALUES (8)", _connection);
        _ = await cmd.ExecuteScalarAsync();
    }

    [Benchmark]
    public async Task Output()
    {
        await using var cmd = new SqlCommand("INSERT INTO [Foo] ([Bar]) OUTPUT INSERTED.[Id] VALUES (8)", _connection);
        _ = await cmd.ExecuteScalarAsync();
    }

    [Benchmark(Baseline = true)]
    public async Task OutputInto()
    {
        await using var cmd = new SqlCommand(
            @"DECLARE @inserted TABLE ([Id] int);
INSERT INTO [Foo] ([Bar]) OUTPUT INSERTED.[Id] INTO @inserted VALUES (8);
SELECT [i].[Id] FROM @inserted i;", _connection);
        _ = await cmd.ExecuteScalarAsync();
    }

    [Benchmark]
    public async Task OutputInto_MemoryOptimized_persistent_type()
    {
        await using var cmd = new SqlCommand(
            @"DECLARE @inserted [tempTableType];
INSERT INTO [Foo] ([Bar]) OUTPUT INSERTED.[Id] INTO @inserted VALUES (8);
SELECT [i].[Id] FROM @inserted i;", _connection);
        _ = await cmd.ExecuteScalarAsync();
    }

    [Benchmark]
    public async Task OutputInto_MemoryOptimized_transient_type()
    {
        await using var cmd = new SqlCommand(
            @"CREATE TYPE [transientTempTableType] AS TABLE ([Id] int PRIMARY KEY NONCLUSTERED) WITH (MEMORY_OPTIMIZED = ON)
DECLARE @inserted [tempTableType];
INSERT INTO [Foo] ([Bar]) OUTPUT INSERTED.[Id] INTO @inserted VALUES (8);
SELECT [i].[Id] FROM @inserted i;
DROP TYPE [transientTempTableType];", _connection);
        _ = await cmd.ExecuteScalarAsync();
    }

    [GlobalCleanup]
    public ValueTask Cleanup() => _connection.DisposeAsync();
}
```
ErikEJ commented 2 years ago

Why are you testing with SEQUENCE? No one uses that, not even the migrations conventions.

roji commented 2 years ago

@ErikEJ it doesn't really matter, it can be any sort of HasDefaultValue/HasDefaultValueSql. It's just that for IDENTITY specifically EF Core uses a different mechanism to bring back the value, so I preferred not to benchmark that.

Basically, anyone not using IDENTITY but having a database-generated column is affected by this. The above is also specific to inserting a few rows - when batching many rows we use MERGE. A similar issue may exist there, but more research is needed.

ErikEJ commented 2 years ago

Seems like quite a narrow audience. For me it is usually either IDENTITY or a client side generated guid.

roji commented 2 years ago

Maybe you're right, though if you have a guid key plus some other generated column, you're affected by this too.

I'm looking at the batching version too (with MERGE), but we may end up deciding it's not worth addressing this.

ErikEJ commented 2 years ago

Maybe you're right, though if you have a guid key plus some other generated column, you're affected by this too.

Yeah, but again a very uncommon edge case scenario, in my experience.

roji commented 2 years ago

I've taken a look at our MERGE usage, which also uses OUTPUT INTO. Here are some ideas:

The above are just ideas at this point... I do believe that 3ms is a pretty huge difference, though.

Benchmark results

|                Method | Rows |      Mean |     Error |    StdDev |    Median | Ratio | RatioSD |
|---------------------- |-----:|----------:|----------:|----------:|----------:|------:|--------:|
| Merge_with_OutputInto |    1 |  5.263 ms | 0.1043 ms | 0.1984 ms |  5.260 ms |  1.00 |    0.00 |
|     Merge_with_Output |    1 |  1.891 ms | 0.0376 ms | 0.0916 ms |  1.898 ms |  0.36 |    0.02 |
|        BatchedInserts |    1 |  1.836 ms | 0.0364 ms | 0.0907 ms |  1.856 ms |  0.35 |    0.02 |
|               Inserts |    1 |  1.835 ms | 0.0442 ms | 0.1296 ms |  1.872 ms |  0.34 |    0.03 |
| Merge_with_OutputInto |    2 |  5.370 ms | 0.1063 ms | 0.2048 ms |  5.403 ms |  1.00 |    0.00 |
|     Merge_with_Output |    2 |  2.009 ms | 0.0376 ms | 0.0351 ms |  2.007 ms |  0.37 |    0.01 |
|        BatchedInserts |    2 |  3.260 ms | 0.0474 ms | 0.0443 ms |  3.257 ms |  0.60 |    0.02 |
|               Inserts |    2 |  3.871 ms | 0.0741 ms | 0.1015 ms |  3.891 ms |  0.72 |    0.03 |
| Merge_with_OutputInto |    3 |  5.555 ms | 0.1094 ms | 0.1534 ms |  5.547 ms |  1.00 |    0.00 |
|     Merge_with_Output |    3 |  2.041 ms | 0.0352 ms | 0.0329 ms |  2.036 ms |  0.37 |    0.01 |
|        BatchedInserts |    3 |  4.580 ms | 0.0868 ms | 0.0928 ms |  4.584 ms |  0.83 |    0.03 |
|               Inserts |    3 |  5.741 ms | 0.1147 ms | 0.2856 ms |  5.817 ms |  1.02 |    0.08 |
| Merge_with_OutputInto |    4 |  5.451 ms | 0.1085 ms | 0.1900 ms |  5.472 ms |  1.00 |    0.00 |
|     Merge_with_Output |    4 |  2.050 ms | 0.0406 ms | 0.0839 ms |  2.068 ms |  0.38 |    0.02 |
|        BatchedInserts |    4 |  5.772 ms | 0.1146 ms | 0.2038 ms |  5.803 ms |  1.06 |    0.06 |
|               Inserts |    4 |  7.755 ms | 0.1079 ms | 0.1009 ms |  7.762 ms |  1.41 |    0.06 |
| Merge_with_OutputInto |    5 |  5.518 ms | 0.1086 ms | 0.2066 ms |  5.554 ms |  1.00 |    0.00 |
|     Merge_with_Output |    5 |  2.090 ms | 0.0346 ms | 0.0324 ms |  2.090 ms |  0.38 |    0.02 |
|        BatchedInserts |    5 |  7.037 ms | 0.1386 ms | 0.2832 ms |  7.094 ms |  1.28 |    0.07 |
|               Inserts |    5 |  9.639 ms | 0.1890 ms | 0.3641 ms |  9.702 ms |  1.75 |    0.10 |
| Merge_with_OutputInto |    6 |  5.606 ms | 0.1120 ms | 0.2184 ms |  5.600 ms |  1.00 |    0.00 |
|     Merge_with_Output |    6 |  2.140 ms | 0.0295 ms | 0.0276 ms |  2.134 ms |  0.38 |    0.02 |
|        BatchedInserts |    6 |  8.320 ms | 0.1641 ms | 0.3314 ms |  8.399 ms |  1.49 |    0.08 |
|               Inserts |    6 | 11.734 ms | 0.1766 ms | 0.1652 ms | 11.735 ms |  2.09 |    0.11 |
| Merge_with_OutputInto |    7 |  5.676 ms | 0.1134 ms | 0.2490 ms |  5.747 ms |  1.00 |    0.00 |
|     Merge_with_Output |    7 |  2.175 ms | 0.0423 ms | 0.0671 ms |  2.192 ms |  0.38 |    0.02 |
|        BatchedInserts |    7 |  9.724 ms | 0.1851 ms | 0.2131 ms |  9.749 ms |  1.71 |    0.09 |
|               Inserts |    7 | 13.390 ms | 0.2642 ms | 0.5574 ms | 13.497 ms |  2.37 |    0.14 |
| Merge_with_OutputInto |    8 |  5.707 ms | 0.1131 ms | 0.2152 ms |  5.735 ms |  1.00 |    0.00 |
|     Merge_with_Output |    8 |  2.218 ms | 0.0442 ms | 0.0739 ms |  2.233 ms |  0.39 |    0.02 |
|        BatchedInserts |    8 | 10.807 ms | 0.2128 ms | 0.2277 ms | 10.773 ms |  1.92 |    0.07 |
|               Inserts |    8 | 15.381 ms | 0.2590 ms | 0.2423 ms | 15.332 ms |  2.75 |    0.13 |
| Merge_with_OutputInto |    9 |  5.773 ms | 0.1149 ms | 0.2398 ms |  5.833 ms |  1.00 |    0.00 |
|     Merge_with_Output |    9 |  2.245 ms | 0.0444 ms | 0.0494 ms |  2.258 ms |  0.39 |    0.02 |
|        BatchedInserts |    9 | 12.031 ms | 0.2375 ms | 0.5010 ms | 12.149 ms |  2.09 |    0.12 |
|               Inserts |    9 | 17.188 ms | 0.3232 ms | 0.3023 ms | 17.189 ms |  3.01 |    0.13 |
| Merge_with_OutputInto |   10 |  5.860 ms | 0.1160 ms | 0.2235 ms |  5.958 ms |  1.00 |    0.00 |
|     Merge_with_Output |   10 |  2.300 ms | 0.0413 ms | 0.0366 ms |  2.295 ms |  0.40 |    0.02 |
|        BatchedInserts |   10 | 13.592 ms | 0.2710 ms | 0.4219 ms | 13.617 ms |  2.34 |    0.12 |
|               Inserts |   10 | 19.513 ms | 0.3207 ms | 0.3000 ms | 19.528 ms |  3.39 |    0.20 |
Benchmark code (note: the table/column names have been aligned with the Setup method, which creates [Foo]([Bar])):

```c#
BenchmarkRunner.Run<SequenceBenchmark>();

public class SequenceBenchmark
{
    const string ConnectionString =
        "Server=localhost;Database=test;User=SA;Password=Abcd5678;Connect Timeout=60;ConnectRetryCount=0;Encrypt=false";

    private SqlCommand _command;
    private SqlConnection _connection;

    [Params(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)]
    public int Rows { get; set; }

    private async Task Setup()
    {
        _connection = new SqlConnection(ConnectionString);
        await _connection.OpenAsync();

        await using var cmd = new SqlCommand(
            @"
DROP TABLE IF EXISTS [Foo];
DROP SEQUENCE IF EXISTS [FooSeq];
CREATE SEQUENCE [FooSeq] AS int START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE NO CYCLE;
CREATE TABLE [Foo] (
    [Id] int PRIMARY KEY NOT NULL DEFAULT (NEXT VALUE FOR FooSeq),
    [Bar] int
);", _connection);
        await cmd.ExecuteNonQueryAsync();
    }

    [GlobalSetup(Target = nameof(Merge_with_OutputInto))]
    public async Task Setup_Merge_with_OutputInto()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        builder.AppendLine(
            @"DECLARE @inserted0 TABLE ([Id] int, [_Position] [int]);
MERGE [Foo] USING (
VALUES");
        for (var i = 0; i < Rows; i++)
        {
            builder.Append($"(@p{i}, {i})");
            if (i < Rows - 1)
                builder.AppendLine(",");
            _command.Parameters.Add(new("p" + i, i));
        }
        builder.Append(
            @") AS i ([Bar], _Position) ON 1=0
WHEN NOT MATCHED THEN
INSERT ([Bar]) VALUES (i.[Bar])
OUTPUT INSERTED.[Id], i._Position INTO @inserted0;
SELECT [i].[Id] FROM @inserted0 i ORDER BY [i].[_Position];");
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(Merge_with_Output))]
    public async Task Setup_Merge_with_Output()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        builder.AppendLine(@"MERGE [Foo] USING (
VALUES");
        for (var i = 0; i < Rows; i++)
        {
            builder.Append($"(@p{i})");
            if (i < Rows - 1)
                builder.AppendLine(",");
            _command.Parameters.Add(new("p" + i, i));
        }
        builder.Append(
            @") AS i ([Bar]) ON 1=0
WHEN NOT MATCHED THEN
INSERT ([Bar]) VALUES (i.[Bar])
OUTPUT INSERTED.[Id];");
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(BatchedInserts))]
    public async Task Setup_BatchInserts()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        for (var i = 0; i < Rows; i++)
        {
            if (i > 0)
                builder.Append("; ");
            builder.Append($"INSERT INTO [Foo] ([Bar]) VALUES (@p{i})");
            _command.Parameters.Add(new("p" + i, i));
        }
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(Inserts))]
    public async Task Setup_Inserts()
    {
        await Setup();
        _command = new SqlCommand("INSERT INTO [Foo] ([Bar]) VALUES (@p0)", _connection);
        _command.Parameters.Add(new SqlParameter { ParameterName = "p0" });
    }

    [Benchmark(Baseline = true)]
    public async Task Merge_with_OutputInto() => _ = await _command.ExecuteScalarAsync();

    [Benchmark]
    public async Task Merge_with_Output() => _ = await _command.ExecuteScalarAsync();

    [Benchmark]
    public async Task BatchedInserts() => _ = await _command.ExecuteScalarAsync();

    [Benchmark]
    public async Task Inserts()
    {
        for (var i = 0; i < Rows; i++)
        {
            _command.Parameters[0].Value = i;
            _ = await _command.ExecuteScalarAsync();
        }
    }

    [GlobalCleanup]
    public ValueTask Cleanup() => _connection.DisposeAsync();
}
```
ajcvickers commented 2 years ago

@roji

If we're getting the key(s) column, we could use that to look up the entity instance at the client (i.e. not care about the ordering).

So, for generated keys, you mean pass in the temporary values for all the key columns, and then read them back out correlated with the real values?

roji commented 2 years ago

No need to pass them in - just ask for them to be sent back to you via the OUTPUT clause. Yeah, this adds more traffic, but given the extreme overhead of OUTPUT INTO there's a good chance it's justified.

(note also the very common scenario where the ID itself is database-generated, so must be in the OUTPUT clause in any case).
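To illustrate the idea (a sketch only; [CreatedOn] stands in for any additional database-generated column): when the key is client-generated, OUTPUT can return it alongside the other generated values, letting the client correlate rows without relying on ordering:

```sql
-- @p0 is the client-generated guid key; the OUTPUT clause echoes it back
-- together with the database-generated [CreatedOn] value for correlation.
INSERT INTO [Blogs] ([Id], [Name])
OUTPUT INSERTED.[Id], INSERTED.[CreatedOn]
VALUES (@p0, @p1);
```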

roji commented 2 years ago

One additional note from the above: if we did switch to MERGE with OUTPUT instead of OUTPUT INTO, it becomes an efficient strategy starting with 1 row, so we could drop the INSERT/MERGE threshold and just always do MERGE (of course the above caveats still hold).

ajcvickers commented 2 years ago

But how can we look up the entity for which to set the generated ID when we don't know the ID until it is generated?

roji commented 2 years ago

Oops, you're right - I got tangled up here between the various scenarios. Will think about this some more and post something later.

roji commented 2 years ago

We still have the option of using MERGE with the "fake" position column, but with simple OUTPUT instead of OUTPUT INTO:

MERGE [Foo] USING (
    VALUES (100, 0),
           (101, 1),
           (102, 2),
           (103, 3),
           (104, 4),
           (105, 5),
           (106, 6),
           (107, 7),
           (108, 8),
           (109, 9)) AS i ([BAR], _Position) ON 1=0
WHEN NOT MATCHED THEN
    INSERT ([BAR])
    VALUES (i.[BAR])
    OUTPUT INSERTED.[Id], i._Position;

That precludes doing a server-side ORDER BY (as we do today with OUTPUT INTO), and the ordering isn't guaranteed. But at the client we could look up each entity instance by its position and propagate the server-generated values. The cost here is the transfer of the additional position value to the client (per row), weighed against the 3x perf hit of using OUTPUT INTO. Though it seems pretty clear which way this goes perf-wise, I'll do a benchmark later and post it.
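The client-side propagation could look roughly like this (a sketch of the idea, not actual EF Core internals; `entries` is assumed to hold the tracked entries in the order their rows were appended to the MERGE):

```c#
// Read back (Id, _Position) pairs from the MERGE ... OUTPUT result set and
// propagate each generated Id to the entry that produced that row.
using var reader = await command.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
    // Rows may arrive in any order; _Position tells us which entry each
    // generated value belongs to.
    var id = reader.GetInt32(0);       // INSERTED.[Id]
    var position = reader.GetInt32(1); // i._Position
    entries[position].Id = id;         // single list lookup per row
}
```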

roji commented 2 years ago

See below for another, more complete benchmark run. tl;dr doing OUTPUT with position has the same perf as OUTPUT without it.

|                           Method | Rows |       Mean |      Error |     StdDev |     Median | Ratio | RatioSD |
|--------------------------------- |-----:|-----------:|-----------:|-----------:|-----------:|------:|--------:|
|            Merge_with_OutputInto |    1 |   5.419 ms |  0.1079 ms |  0.2521 ms |   5.478 ms |  1.00 |    0.00 |
|                Merge_with_Output |    1 |   1.889 ms |  0.0374 ms |  0.0747 ms |   1.909 ms |  0.35 |    0.02 |
|  Merge_with_Output_with_position |    1 |   1.909 ms |  0.0379 ms |  0.0756 ms |   1.922 ms |  0.35 |    0.03 |
|             Merge_without_Output |    1 |   1.865 ms |  0.0369 ms |  0.0802 ms |   1.883 ms |  0.34 |    0.02 |
| Insert_row_values_without_Output |    1 |   1.885 ms |  0.0373 ms |  0.0559 ms |   1.894 ms |  0.35 |    0.03 |
|                   BatchedInserts |    1 |   1.886 ms |  0.0376 ms |  0.0743 ms |   1.899 ms |  0.35 |    0.03 |
|                          Inserts |    1 |   1.874 ms |  0.0374 ms |  0.0804 ms |   1.890 ms |  0.35 |    0.03 |
|        Insert_and_Select_batched |    1 |   1.971 ms |  0.0394 ms |  0.1003 ms |   1.997 ms |  0.36 |    0.03 |
|                Insert_and_Select |    1 |   2.018 ms |  0.0305 ms |  0.0285 ms |   2.016 ms |  0.38 |    0.03 |
|            Merge_with_OutputInto |    2 |   5.538 ms |  0.1104 ms |  0.3238 ms |   5.620 ms |  1.00 |    0.00 |
|                Merge_with_Output |    2 |   1.970 ms |  0.0375 ms |  0.0401 ms |   1.973 ms |  0.36 |    0.04 |
|  Merge_with_Output_with_position |    2 |   1.987 ms |  0.0397 ms |  0.0802 ms |   2.002 ms |  0.37 |    0.03 |
|             Merge_without_Output |    2 |   1.884 ms |  0.0376 ms |  0.0751 ms |   1.903 ms |  0.35 |    0.03 |
| Insert_row_values_without_Output |    2 |   1.877 ms |  0.0368 ms |  0.0528 ms |   1.890 ms |  0.35 |    0.04 |
|                   BatchedInserts |    2 |   3.174 ms |  0.0552 ms |  0.0516 ms |   3.175 ms |  0.58 |    0.07 |
|                          Inserts |    2 |   3.743 ms |  0.0493 ms |  0.0461 ms |   3.733 ms |  0.69 |    0.08 |
|        Insert_and_Select_batched |    2 |   3.304 ms |  0.0610 ms |  0.0571 ms |   3.301 ms |  0.61 |    0.08 |
|                Insert_and_Select |    2 |   3.907 ms |  0.0762 ms |  0.1624 ms |   3.936 ms |  0.72 |    0.07 |
|            Merge_with_OutputInto |    3 |   5.374 ms |  0.1067 ms |  0.2636 ms |   5.425 ms |  1.00 |    0.00 |
|                Merge_with_Output |    3 |   1.925 ms |  0.0372 ms |  0.0398 ms |   1.931 ms |  0.36 |    0.02 |
|  Merge_with_Output_with_position |    3 |   1.918 ms |  0.0378 ms |  0.0853 ms |   1.942 ms |  0.36 |    0.03 |
|             Merge_without_Output |    3 |   1.898 ms |  0.0371 ms |  0.0716 ms |   1.911 ms |  0.36 |    0.02 |
| Insert_row_values_without_Output |    3 |   1.884 ms |  0.0375 ms |  0.0740 ms |   1.899 ms |  0.35 |    0.02 |
|                   BatchedInserts |    3 |   4.397 ms |  0.0878 ms |  0.1909 ms |   4.427 ms |  0.82 |    0.05 |
|                          Inserts |    3 |   5.635 ms |  0.0628 ms |  0.0588 ms |   5.615 ms |  1.06 |    0.06 |
|        Insert_and_Select_batched |    3 |   4.554 ms |  0.0897 ms |  0.1618 ms |   4.594 ms |  0.86 |    0.06 |
|                Insert_and_Select |    3 |   5.876 ms |  0.1172 ms |  0.1605 ms |   5.871 ms |  1.11 |    0.08 |
|            Merge_with_OutputInto |    4 |   5.407 ms |  0.1078 ms |  0.3178 ms |   5.460 ms |  1.00 |    0.00 |
|                Merge_with_Output |    4 |   1.935 ms |  0.0249 ms |  0.0221 ms |   1.934 ms |  0.37 |    0.04 |
|  Merge_with_Output_with_position |    4 |   1.935 ms |  0.0386 ms |  0.0724 ms |   1.950 ms |  0.36 |    0.03 |
|             Merge_without_Output |    4 |   1.897 ms |  0.0375 ms |  0.0713 ms |   1.914 ms |  0.35 |    0.03 |
| Insert_row_values_without_Output |    4 |   1.914 ms |  0.0231 ms |  0.0216 ms |   1.918 ms |  0.36 |    0.04 |
|                   BatchedInserts |    4 |   5.665 ms |  0.1116 ms |  0.1833 ms |   5.701 ms |  1.06 |    0.10 |
|                          Inserts |    4 |   7.394 ms |  0.1472 ms |  0.3382 ms |   7.476 ms |  1.36 |    0.10 |
|        Insert_and_Select_batched |    4 |   5.874 ms |  0.1169 ms |  0.2224 ms |   5.908 ms |  1.09 |    0.09 |
|                Insert_and_Select |    4 |   7.871 ms |  0.1536 ms |  0.2251 ms |   7.909 ms |  1.48 |    0.13 |
|            Merge_with_OutputInto |    5 |   5.515 ms |  0.1099 ms |  0.2064 ms |   5.572 ms |  1.00 |    0.00 |
|                Merge_with_Output |    5 |   1.938 ms |  0.0385 ms |  0.0803 ms |   1.956 ms |  0.35 |    0.02 |
|  Merge_with_Output_with_position |    5 |   1.956 ms |  0.0387 ms |  0.0737 ms |   1.973 ms |  0.36 |    0.02 |
|             Merge_without_Output |    5 |   1.910 ms |  0.0376 ms |  0.0668 ms |   1.928 ms |  0.35 |    0.02 |
| Insert_row_values_without_Output |    5 |   1.889 ms |  0.0376 ms |  0.1059 ms |   1.922 ms |  0.34 |    0.03 |
|                   BatchedInserts |    5 |   6.878 ms |  0.1150 ms |  0.1076 ms |   6.897 ms |  1.26 |    0.06 |
|                          Inserts |    5 |   9.222 ms |  0.1821 ms |  0.4363 ms |   9.319 ms |  1.67 |    0.11 |
|        Insert_and_Select_batched |    5 |   7.220 ms |  0.1433 ms |  0.2547 ms |   7.264 ms |  1.31 |    0.08 |
|                Insert_and_Select |    5 |   9.879 ms |  0.1751 ms |  0.1638 ms |   9.824 ms |  1.81 |    0.08 |
|            Merge_with_OutputInto |    8 |   5.508 ms |  0.1094 ms |  0.2724 ms |   5.559 ms |  1.00 |    0.00 |
|                Merge_with_Output |    8 |   1.984 ms |  0.0391 ms |  0.0724 ms |   2.002 ms |  0.36 |    0.03 |
|  Merge_with_Output_with_position |    8 |   1.993 ms |  0.0385 ms |  0.0551 ms |   1.999 ms |  0.37 |    0.03 |
|             Merge_without_Output |    8 |   1.966 ms |  0.0392 ms |  0.0755 ms |   1.983 ms |  0.36 |    0.02 |
| Insert_row_values_without_Output |    8 |   1.958 ms |  0.0387 ms |  0.0745 ms |   1.970 ms |  0.36 |    0.02 |
|                   BatchedInserts |    8 |  10.718 ms |  0.2125 ms |  0.4244 ms |  10.772 ms |  1.96 |    0.16 |
|                          Inserts |    8 |  15.107 ms |  0.1686 ms |  0.1577 ms |  15.134 ms |  2.79 |    0.29 |
|        Insert_and_Select_batched |    8 |  11.208 ms |  0.2221 ms |  0.3586 ms |  11.254 ms |  2.06 |    0.17 |
|                Insert_and_Select |    8 |  15.714 ms |  0.1691 ms |  0.1412 ms |  15.726 ms |  2.92 |    0.32 |
|            Merge_with_OutputInto |   10 |   5.627 ms |  0.1115 ms |  0.2518 ms |   5.673 ms |  1.00 |    0.00 |
|                Merge_with_Output |   10 |   2.027 ms |  0.0398 ms |  0.0653 ms |   2.040 ms |  0.37 |    0.03 |
|  Merge_with_Output_with_position |   10 |   2.036 ms |  0.0397 ms |  0.0706 ms |   2.050 ms |  0.37 |    0.02 |
|             Merge_without_Output |   10 |   1.997 ms |  0.0398 ms |  0.0717 ms |   2.013 ms |  0.36 |    0.02 |
| Insert_row_values_without_Output |   10 |   2.007 ms |  0.0400 ms |  0.0771 ms |   2.020 ms |  0.36 |    0.02 |
|                   BatchedInserts |   10 |  13.612 ms |  0.2703 ms |  0.5701 ms |  13.694 ms |  2.42 |    0.17 |
|                          Inserts |   10 |  18.758 ms |  0.3679 ms |  0.7919 ms |  18.789 ms |  3.34 |    0.25 |
|        Insert_and_Select_batched |   10 |  13.594 ms |  0.2708 ms |  0.7181 ms |  13.759 ms |  2.42 |    0.18 |
|                Insert_and_Select |   10 |  19.665 ms |  0.2528 ms |  0.2365 ms |  19.629 ms |  3.63 |    0.29 |
|            Merge_with_OutputInto |   50 |   6.041 ms |  0.1199 ms |  0.3116 ms |   6.114 ms |  1.00 |    0.00 |
|                Merge_with_Output |   50 |   2.679 ms |  0.0532 ms |  0.1519 ms |   2.710 ms |  0.44 |    0.04 |
|  Merge_with_Output_with_position |   50 |   2.737 ms |  0.0591 ms |  0.1742 ms |   2.781 ms |  0.46 |    0.04 |
|             Merge_without_Output |   50 |   2.608 ms |  0.0508 ms |  0.0728 ms |   2.617 ms |  0.43 |    0.03 |
| Insert_row_values_without_Output |   50 |   2.516 ms |  0.0502 ms |  0.1313 ms |   2.536 ms |  0.42 |    0.03 |
|                   BatchedInserts |   50 |  62.874 ms |  1.2480 ms |  2.4924 ms |  63.480 ms | 10.43 |    0.89 |
|                          Inserts |   50 |  92.748 ms |  1.4220 ms |  1.3302 ms |  92.877 ms | 15.48 |    0.91 |
|        Insert_and_Select_batched |   50 |  65.646 ms |  1.2921 ms |  2.8631 ms |  66.162 ms | 10.89 |    0.86 |
|                Insert_and_Select |   50 |  96.867 ms |  1.9522 ms |  5.7255 ms |  97.789 ms | 16.04 |    1.47 |
|            Merge_with_OutputInto |  100 |   6.678 ms |  0.1324 ms |  0.3488 ms |   6.736 ms |  1.00 |    0.00 |
|                Merge_with_Output |  100 |   3.852 ms |  0.0757 ms |  0.0743 ms |   3.855 ms |  0.57 |    0.03 |
|  Merge_with_Output_with_position |  100 |   3.774 ms |  0.0751 ms |  0.1841 ms |   3.780 ms |  0.57 |    0.05 |
|             Merge_without_Output |  100 |   3.681 ms |  0.0707 ms |  0.0786 ms |   3.682 ms |  0.55 |    0.04 |
| Insert_row_values_without_Output |  100 |   3.627 ms |  0.0681 ms |  0.0700 ms |   3.648 ms |  0.54 |    0.02 |
|                   BatchedInserts |  100 | 125.131 ms |  2.4820 ms |  5.5000 ms | 126.176 ms | 18.82 |    1.28 |
|                          Inserts |  100 | 183.652 ms |  2.9962 ms |  2.8027 ms | 183.832 ms | 27.29 |    1.06 |
|        Insert_and_Select_batched |  100 | 125.964 ms |  3.5420 ms | 10.4437 ms | 128.403 ms | 18.87 |    2.15 |
|                Insert_and_Select |  100 | 194.423 ms |  3.6823 ms |  3.2642 ms | 194.925 ms | 28.93 |    1.42 |
|            Merge_with_OutputInto |  500 |  11.665 ms |  0.2567 ms |  0.7569 ms |  11.631 ms |  1.00 |    0.00 |
|                Merge_with_Output |  500 |  10.024 ms |  0.2173 ms |  0.6372 ms |  10.080 ms |  0.86 |    0.08 |
|  Merge_with_Output_with_position |  500 |  10.345 ms |  0.2065 ms |  0.5723 ms |  10.365 ms |  0.89 |    0.08 |
|             Merge_without_Output |  500 |   9.723 ms |  0.2080 ms |  0.6099 ms |   9.781 ms |  0.84 |    0.08 |
| Insert_row_values_without_Output |  500 |   9.837 ms |  0.1939 ms |  0.4257 ms |   9.905 ms |  0.84 |    0.07 |
|                   BatchedInserts |  500 | 618.321 ms | 12.3487 ms | 15.1653 ms | 615.345 ms | 53.02 |    2.84 |
|                          Inserts |  500 | 925.531 ms | 18.1276 ms | 31.7490 ms | 930.170 ms | 79.33 |    5.71 |
|        Insert_and_Select_batched |  500 | 655.495 ms | 12.8008 ms | 22.7535 ms | 660.814 ms | 56.23 |    3.96 |
|                Insert_and_Select |  500 | 992.402 ms | 14.2731 ms | 12.6527 ms | 992.448 ms | 86.45 |    4.95 |
Benchmark code (note: the Insert_and_Select variants have been aligned to select from [Foo], which is the table created in Setup):

```c#
BenchmarkRunner.Run<SequenceBenchmark>();

public class SequenceBenchmark
{
    const string ConnectionString =
        "Server=localhost;Database=test;User=SA;Password=Abcd5678;Connect Timeout=60;ConnectRetryCount=0;Encrypt=false";

    private SqlCommand _command;
    private SqlConnection _connection;

    [Params(1, 2, 3, 4, 5, 8, 10, 50, 100, 500)]
    // [Params(1)]
    public int Rows { get; set; }

    private async Task Setup()
    {
        _connection = new SqlConnection(ConnectionString);
        await _connection.OpenAsync();

        await using var cmd = new SqlCommand(
            @"
DROP TABLE IF EXISTS [Foo];
CREATE TABLE [Foo] (
    [Id] int IDENTITY PRIMARY KEY,
    [Bar] int
);", _connection);
        await cmd.ExecuteNonQueryAsync();
    }

    [GlobalSetup(Target = nameof(Merge_with_OutputInto))]
    public async Task Setup_Merge_with_OutputInto()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        builder.AppendLine(
            @"DECLARE @inserted0 TABLE ([Id] int, [_Position] [int]);
MERGE [Foo] USING (
VALUES");
        for (var i = 0; i < Rows; i++)
        {
            builder.Append($"(@p{i}, {i})");
            if (i < Rows - 1)
                builder.AppendLine(",");
            _command.Parameters.Add(new("p" + i, 8));
        }
        builder.Append(
            @") AS i ([Bar], _Position) ON 1=0
WHEN NOT MATCHED THEN
INSERT ([Bar]) VALUES (i.[Bar])
OUTPUT INSERTED.[Id], i._Position INTO @inserted0;
SELECT [i].[Id] FROM @inserted0 i ORDER BY [i].[_Position];");
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(Merge_with_Output))]
    public async Task Setup_Merge_with_Output()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        builder.AppendLine(@"MERGE [Foo] USING (
VALUES");
        for (var i = 0; i < Rows; i++)
        {
            builder.Append($"(@p{i})");
            if (i < Rows - 1)
                builder.AppendLine(",");
            _command.Parameters.Add(new("p" + i, 8));
        }
        builder.Append(
            @") AS i ([Bar]) ON 1=0
WHEN NOT MATCHED THEN
INSERT ([Bar]) VALUES (i.[Bar])
OUTPUT INSERTED.[Id];");
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(Merge_with_Output_with_position))]
    public async Task Setup_Merge_with_Output_with_position()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        builder.AppendLine(@"MERGE [Foo] USING (
VALUES");
        for (var i = 0; i < Rows; i++)
        {
            builder.Append($"(@p{i}, {i})");
            if (i < Rows - 1)
                builder.AppendLine(",");
            _command.Parameters.Add(new("p" + i, 8));
        }
        builder.Append(
            @") AS i ([Bar], _Position) ON 1=0
WHEN NOT MATCHED THEN
INSERT ([Bar]) VALUES (i.[Bar])
OUTPUT INSERTED.[Id], i._Position;");
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(Merge_without_Output))]
    public async Task Setup_Merge_without_Output()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        builder.AppendLine(@"MERGE [Foo] USING (
VALUES");
        for (var i = 0; i < Rows; i++)
        {
            builder.Append($"(@p{i})");
            if (i < Rows - 1)
                builder.AppendLine(",");
            _command.Parameters.Add(new("p" + i, 8));
        }
        builder.Append(
            @") AS i ([Bar]) ON 1=0
WHEN NOT MATCHED THEN
INSERT ([Bar]) VALUES (i.[Bar]);");
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(Insert_row_values_without_Output))]
    public async Task Setup_Insert_row_values_without_Output()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        builder.Append(@"INSERT INTO [Foo] ([Bar]) VALUES ");
        for (var i = 0; i < Rows; i++)
        {
            builder.Append($"(@p{i})");
            if (i < Rows - 1)
                builder.Append(",");
            _command.Parameters.Add(new("p" + i, 8));
        }
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(BatchedInserts))]
    public async Task Setup_BatchInserts()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        for (var i = 0; i < Rows; i++)
        {
            if (i > 0)
                builder.Append("; ");
            builder.Append($"INSERT INTO [Foo] ([Bar]) VALUES (@p{i})");
            _command.Parameters.Add(new("p" + i, 8));
        }
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(Inserts))]
    public async Task Setup_Inserts()
    {
        await Setup();
        _command = new SqlCommand("INSERT INTO [Foo] ([Bar]) VALUES (@p0)", _connection);
        _command.Parameters.AddWithValue("p0", 8);
    }

    [GlobalSetup(Target = nameof(Insert_and_Select_batched))]
    public async Task Setup_Insert_and_Select_batched()
    {
        await Setup();
        _command = new SqlCommand { Connection = _connection };
        var builder = new StringBuilder();
        for (var i = 0; i < Rows; i++)
        {
            if (i > 0)
                builder.Append("; ");
            builder.Append(
                @$"INSERT INTO [Foo] ([Bar]) VALUES (@p{i});
SELECT [Id] FROM [Foo] WHERE @@ROWCOUNT = 1 AND [Id] = scope_identity();");
            _command.Parameters.Add(new("p" + i, 8));
        }
        _command.CommandText = builder.ToString();
    }

    [GlobalSetup(Target = nameof(Insert_and_Select))]
    public async Task Setup_Insert_and_Select()
    {
        await Setup();
        _command = new SqlCommand(
            @"INSERT INTO [Foo] ([Bar]) VALUES (@p0);
SELECT [Id] FROM [Foo] WHERE @@ROWCOUNT = 1 AND [Id] = scope_identity()", _connection);
        _command.Parameters.AddWithValue("p0", 8);
    }

    [Benchmark(Baseline = true)]
    public async Task Merge_with_OutputInto()
    {
        using var reader = await _command.ExecuteReaderAsync();
        while (reader.Read())
            _ = reader.GetInt32(0);
    }

    [Benchmark]
    public async Task Merge_with_Output()
    {
        using var reader = await _command.ExecuteReaderAsync();
        while (reader.Read())
            _ = reader.GetInt32(0);
    }

    [Benchmark]
    public async Task Merge_with_Output_with_position()
    {
        using var reader = await _command.ExecuteReaderAsync();
        while (reader.Read())
            _ = reader.GetInt32(0);
    }

    [Benchmark]
    public async Task Merge_without_Output() => await _command.ExecuteNonQueryAsync();

    [Benchmark]
    public async Task Insert_row_values_without_Output() => await _command.ExecuteNonQueryAsync();

    [Benchmark]
    public async Task BatchedInserts() => await _command.ExecuteNonQueryAsync();

    [Benchmark]
    public async Task Inserts()
    {
        for (var i = 0; i < Rows; i++)
            await _command.ExecuteNonQueryAsync();
    }

    [Benchmark]
    public async Task Insert_and_Select_batched()
    {
        using var reader = await _command.ExecuteReaderAsync();
        while (reader.Read())
            _ = reader.GetInt32(0);
    }

    [Benchmark]
    public async Task Insert_and_Select()
    {
        for (var i = 0; i < Rows; i++)
            _ = await _command.ExecuteScalarAsync();
    }

    [GlobalCleanup]
    public ValueTask Cleanup() => _connection.DisposeAsync();
}
```
AndriySvyryd commented 2 years ago

@roji You'd need to also take into account the client-side lookup for OUTPUT with position, though I doubt it will be significant.

roji commented 2 years ago

@AndriySvyryd you're right. Ideally that would be a single array/list lookup per row, which should be really negligible. But in any case, assuming we move forward with this, I'll do proper benchmarking on the final implementation too.

ajcvickers commented 2 years ago

Note for triage: clearing milestone since nobody is assigned.

roji commented 2 years ago

Note: updated the above benchmark with more techniques: batched and unbatched INSERT+SELECT for IDENTITY, as well as inserting without getting generated values back.

roji commented 2 years ago

Design decisions:

GSPP commented 2 years ago

The more I read about your research the more I realize how difficult this design space is to navigate. As a customer, I appreciate your efforts. Performance is important to the systems that I work on. 👍

roji commented 2 years ago

@GSPP thanks for the kind words... Yeah, I admit I did not anticipate this complexity when I started looking at this - SQL Server certainly makes life difficult in this area. Any feedback on the direction we take is greatly appreciated.

fowl2 commented 2 years ago

Would it be possible to consult the SQL Server team, perhaps there's something they could advise (or even improve!) from their side?

It seems like this specific pattern could be recognized by the query planner and made equivalent to the plain OUTPUT version.

roji commented 2 years ago

@fowl2 which pattern are you suggesting would be recognized in the query planner?

fowl2 commented 2 years ago

@roji The currently generated query with the temporary table variable, from the initial issue description:

DECLARE @inserted0 TABLE ([Id] int);
INSERT INTO [Blogs] ([Name]) OUTPUT INSERTED.[Id] INTO @inserted0 VALUES (@p0);
SELECT [i].[Id] FROM @inserted0 i;

Unless I'm missing something (entirely possible!), local variables only live within a batch, so there's no semantic reason for this to be slower, modulo parsing/optimization/transfer time.

Perhaps SQL Server's internals prevent cross-statement optimization, but it doesn't hurt to ask, does it? Perhaps there's some other way that hasn't been considered yet.

At a stretch, even removing the restriction on using 'OUTPUT without the INTO' in the presence of triggers might be possible, although that's a bit of a bigger ask 😅.

roji commented 2 years ago

I agree that SQL Server could definitely improve things, but we need to support current and past versions of SQL Server in any case, so that wouldn't help us here. I definitely encourage everyone to raise these issues with the SQL Server team, and will do my best to do so internally as well.

bgribaudo commented 2 years ago

@fowl2

At a stretch, even removing the restriction on using 'OUTPUT without the INTO' in the presence of triggers might be possible, although that's a bit of a bigger ask 😅.

This would be nice! True, it wouldn't fix backward compatibility, but it sure might help going forward.

roji commented 2 years ago

Note: split the global trigger opt-in (as opposed to the per-table one) out to #27372 because we want to implement it as a custom convention.

roji commented 2 years ago

For the trigger metadata API, all triggers have a name, so we'll introduce an API based on that. Triggers are generally created in the same schema as their target table; most databases do not allow otherwise (see below for details).

SQL Server:

```sql
CREATE SCHEMA dbo2;
DROP TABLE IF EXISTS dbo.Foo;
CREATE TABLE dbo.Foo (Id INT PRIMARY KEY IDENTITY, Data INT);
CREATE OR ALTER TRIGGER [dbo2].Foo_Trigger ON [dbo].Foo FOR INSERT, UPDATE, DELETE
AS BEGIN
    IF @@ROWCOUNT = 0 RETURN
END;
```

Error: `Cannot create trigger 'dbo2.Foo_Trigger' because its schema is different from the schema of the target table or view.`
PostgreSQL: The [docs](https://www.postgresql.org/docs/current/sql-createtrigger.html) explicitly disallow specifying a schema:

> The name cannot be schema-qualified — the trigger inherits the schema of its table.

SQLite: SQLite doesn't support schemas - there, "schema" refers to a different database instead. [The CREATE TRIGGER docs](https://sqlite.org/lang_createtrigger.html) do discuss the possibility of creating a temporary trigger in another database, but that doesn't seem relevant to us.

MySQL: The situation is similar to SQLite - schemas are databases.

roji commented 2 years ago

There's unfortunately another limitation of the OUTPUT clause: it cannot be used when the target table has a computed column that references a user-defined function (if that function does any data access). This was flagged in #6044, and the solution there was, as with the trigger restriction, to move away from OUTPUT to OUTPUT INTO.
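For illustration, a minimal repro of this restriction might look like the following (table, column, and function names are made up):

```sql
-- A scalar UDF that performs data access:
CREATE FUNCTION dbo.GetLatestRate() RETURNS int
AS BEGIN
    RETURN (SELECT TOP 1 [Rate] FROM dbo.[Rates]);
END;
GO

-- A table with a computed column that calls the UDF:
CREATE TABLE dbo.[Orders] (
    [Id] int IDENTITY PRIMARY KEY,
    [Amount] int,
    [Total] AS ([Amount] * dbo.GetLatestRate())
);
GO

-- This fails, because the target table has a computed column backed by a
-- data-accessing UDF...
INSERT INTO dbo.[Orders] ([Amount]) OUTPUT INSERTED.[Id] VALUES (8);

-- ...whereas routing through OUTPUT INTO a table variable still works:
DECLARE @inserted TABLE ([Id] int);
INSERT INTO dbo.[Orders] ([Amount]) OUTPUT INSERTED.[Id] INTO @inserted VALUES (8);
SELECT [i].[Id] FROM @inserted i;
```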

As suggested by @AndriySvyryd in https://github.com/dotnet/efcore/issues/6044#issuecomment-235058064, we can provide a metadata API to work around this. We may as well make it apply to the table as a whole (rather than specifically to the column), basically opting out of OUTPUT. This could also cover the trigger case instead of a trigger-specific API, though I've already implemented that (and it does go in the direction of supporting trigger management).

BTW like with triggers, here too we should be able to identify the error and refer the user to guidance.

AndriySvyryd commented 2 years ago

I'd still prefer the trigger API as it's more general and will not need changes if either we decide on a different command later or SQL Server adds support for OUTPUT. We can document the computed column case in the breaking changes.

roji commented 2 years ago

@AndriySvyryd so do you mean you'd like the two APIs, i.e. both HasTrigger and e.g. DisableOutputClause? I think we need the latter for the computed column + querying function case; the question in my mind is whether we also add the former, given that we don't plan to actually implement triggers yet...

AndriySvyryd commented 2 years ago

I don't think we need DisableOutputClause, the cases where it would be needed without triggers are rare enough to suggest using HasTrigger instead.

roji commented 2 years ago

It seems a bit weird to tell people to put a fake HasTrigger because they have a computed column with a function... But I guess it doesn't matter much either way. I'll do that.
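For reference, a minimal sketch of that configuration (entity and trigger names here are illustrative; declaring any trigger makes the SQL Server provider fall back to the OUTPUT INTO strategy):

```c#
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // The trigger doesn't have to exist in the database; declaring one is
    // enough to opt this table out of the plain OUTPUT clause.
    modelBuilder.Entity<Blog>()
        .ToTable(tb => tb.HasTrigger("SomeTrigger"));
}
```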

GSPP commented 2 years ago

I can maybe add one more thing: it's been noticed that inserting into a table variable is really slow. In case it isn't known to you: table variables are just normal tables under the hood, with automatically managed lifetimes. They live in tempdb, so they benefit from reduced write-back to disk (an optimization that applies to all of tempdb).

The write code path is, to my knowledge, identical to a normal table write. Table writes are expensive in SQL Server, and in most RDBMSes.

So anything using table variables is unsuitable for common/mainstream ORM scenarios. It must be a special API or opt-in. I believe that the performance cost is a showstopper otherwise.

roji commented 2 years ago

@GSPP that is pretty much what this issue is about - EF Core currently implements many insert types using table variables, which as you say is quite slow (see the benchmarks above, which compare various insertion patterns in SQL Server). The main point of this issue (which I'm currently working on) is to stop using a table variable, and to use simple OUTPUT instead (without INTO). Unfortunately that doesn't work when there are triggers defined on the table, or when a computed column calls a querying user-defined function.

roji commented 2 years ago

/cc @lauxjpn this replaces the default update pipeline to generate a RETURNING clause for getting back generated values (#27547 will do the same for UPDATE and DELETE, for both generated values and for a constant "1" for the EF concurrency check). This reduces the number of statements needed, and when it's just one statement, also removes the transaction; it works well on PostgreSQL, SQLite, SQL Server (where the clause is called OUTPUT), and probably Firebird.

Unfortunately, MySQL doesn't seem to have a RETURNING clause (though MariaDB does)... So you'll have to customize your UpdateSqlGenerator (maybe you've already done that...). This should be really easy, and note that the SQL Server UpdateSqlGenerator also has a mode where it can't use OUTPUT (when there are triggers), so you can look at that for inspiration (though there's lots of extra complexity there). As always don't hesitate to reach out!
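For comparison, the single-statement shape on the different databases looks roughly like this (RETURNING on PostgreSQL/SQLite/MariaDB, OUTPUT on SQL Server; table and column names are illustrative):

```sql
-- PostgreSQL / SQLite / MariaDB:
INSERT INTO "Blogs" ("Name") VALUES (@p0) RETURNING "Id";

-- SQL Server:
INSERT INTO [Blogs] ([Name]) OUTPUT INSERTED.[Id] VALUES (@p0);
```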

mavenius commented 2 years ago

I stumbled into this thread because of an issue that I think you have covered, but I wanted to confirm the use case, in case I am misunderstanding and end up being left out :-)

I would like to receive an output variable that is neither computed nor inserted as part of an update command.

E.g. let's say I have a processing queue table that holds records that need to be processed. I would like to update the start date of an entry to indicate that it is being worked on and atomically receive back the Id (and possibly other information) of the affected record. In SQL, this would be something like the following:

UPDATE TOP (1) Processing_Table SET Start_Date = @Start_Date OUTPUT INSERTED.Id WHERE Start_Date IS NULL

(Or @Start_Date could be replaced with GetDate(), e.g., but hopefully what I am getting at makes sense.)

If what I'm asking is already covered with what you are doing, then thanks. And if it's already possible, even better; I'll keep looking at the docs :-)

roji commented 2 years ago

@mavenius the work covered here is on the EF Core update pipeline, which works together with EF's change tracking; it does not allow for arbitrary updates based on any column or condition (e.g. Start_Date in your example above).

For allowing users to express arbitrary updates/deletes (via LINQ), see #795, which is also planned for 7.0. However, note that allowing you to project out columns (INSERTED.Id above) may very well end up out of scope; we're focusing on the more basic functionality first (see https://github.com/dotnet/efcore/issues/795#issuecomment-1027878116 for the 7.0 plan; what you want is covered by "BulkDelete/BulkUpdate operators which return an IQueryable, for getting back columns of the affected rows").
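As a rough sketch, a LINQ-based bulk update for the scenario above might look like the following (the exact API shape was still under design for #795 at the time, so entity and method names here are illustrative; note that no columns of the affected rows are returned):

```c#
// Hypothetical sketch of a set-based update via LINQ: update matching rows
// directly in the database, without loading/tracking entities.
await context.ProcessingEntries
    .Where(e => e.StartDate == null)
    .ExecuteUpdateAsync(s => s.SetProperty(e => e.StartDate, DateTime.UtcNow));
```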

robertsuditu commented 2 years ago

When I'm trying to save into a table with computed columns, I get this error:

Could not save changes because the target table has computed column with a function that performs data access.
Please configure your entity type accordingly, see https://aka.ms/efcore-docs-sqlserver-save-changes-and-computed-columns for more information.

Then I follow the link in the suggested solution (https://aka.ms/efcore-docs-sqlserver-save-changes-and-computed-columns) and the "mitigation" says to use the HasTrigger("SomeTrigger") method. If I do that in my code - even if I actually don't have any trigger on that table - then the save operation completes successfully. But I don't like the "solution/mitigation" because:

  1. I don't have a trigger on that table, I only have a calculated column
  2. I'm using the database-first approach and each time I do the scaffolding (to update the code from my db modifications), my "HasTrigger(...)" code gets overwritten

Can you, please, give me a better solution?

Thank you.

ErikEJ commented 2 years ago

You should be able to add the HasTrigger call in the partial OnModelCreatingPartial method, in a partial class that will not get overwritten.
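A sketch of that pattern, assuming a scaffolded context named MyDbContext and an entity named Transaction (both names are placeholders):

```c#
// In a separate file that the scaffolder does not regenerate:
public partial class MyDbContext
{
    // Scaffolded contexts invoke this partial method at the end of OnModelCreating,
    // so configuration added here survives re-scaffolding.
    partial void OnModelCreatingPartial(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Transaction>()
            .ToTable(tb => tb.HasTrigger("SomeTrigger"));
    }
}
```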

robertsuditu commented 2 years ago

You should be able to add the HasTrigger statement in a partial OnModelCreatingPartial method in a partial class that will not get overwritten.

I agree, but that would fix only my second point. What about the first one? I should not "lie" to EF about having a trigger, when I don't have one. I should have something like

.... tb => tb.HasCalculatedColumn(....)

ErikEJ commented 2 years ago

For the first one, the docs are being updated.

robertsuditu commented 2 years ago

For the first one, the docs are being updated.

And do we have an estimation about when they would be ready?

ErikEJ commented 2 years ago

Next time the docs are published, but you can have a peek here: https://github.com/dotnet/EntityFramework.Docs/pull/4131/files

gpovedaDev commented 1 year ago

Hi, I don't see an example of how to solve this breaking change when I try to do an update to a table with computed columns: https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-7.0/breaking-changes#sqlserver-tables-with-triggers. Looking at that page, I only see alternatives for when the table has triggers. Any ideas?

ErikEJ commented 1 year ago

@gpovedaDev just pretend the table has a trigger.

gpovedaDev commented 1 year ago

@gpovedaDev just pretend the table has a trigger.

I wonder if I need to specify something to indicate that the name I put in HasTrigger comes from a computed column. Let's say I have a table called Transaction with a computed column Commission. I tried this in the OnModelCreating method:

modelBuilder.Entity<Transaction>().ToTable(tb => tb.HasTrigger("Commission"));

but it did not work.