zzzprojects / Dapper-Plus

Dapper Plus - High-Efficient Bulk Actions (Insert, Update, Delete, and Merge) for .NET
https://dapper-plus.net/

A transport-level error for NVarchar(max) column when using auditing #122

Closed ElenaShlykova closed 1 year ago

ElenaShlykova commented 1 year ago

Description

I have a table with several columns, one of which is of type nvarchar(max). We also need to audit all changes to this table. Everything works fine until I put a large value (around 4000 characters) into the nvarchar(max) column; in that case I get the following error:
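
For orientation, here is a minimal sketch of the kind of setup being described. The entity, table, and column names (Program, mytable, id, payload) are taken from the stack trace and the generated SQL later in this issue; the audit wiring (UseBulkOptions with UseAudit / AuditEntries) follows the documented Dapper Plus audit options and should be read as an assumption rather than the exact code in use.

using System.Collections.Generic;
using System.Data.SqlClient;
using Z.BulkOperations;   // AuditEntry (assumed namespace)
using Z.Dapper.Plus;

public class Program
{
    public long Id { get; set; }        // bigint key column [id]
    public string Payload { get; set; } // the nvarchar(max) column [payload]
}

public static class AuditSample
{
    public static void Run(SqlConnection connection, List<Program> programs)
    {
        // Map the entity to the table once at startup.
        DapperPlusManager.Entity<Program>().Table("mytable").Identity(x => x.Id);

        var auditEntries = new List<AuditEntry>();

        // Enable auditing so old/new values are captured for every affected row.
        connection.UseBulkOptions(options =>
        {
            options.UseAudit = true;
            options.AuditEntries = auditEntries;
        }).BulkDelete(programs); // works until Payload holds roughly 4000+ characters
    }
}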

Exception

Msg 10054, Level 20, State 0, Line 0 A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

Exception message:
Stack trace:
Exception: System.Exception: A timeout error occured (Current Timeout = 120). Please increase the timeout globally: BulkOperationManager.BulkOperationBuilder = operation => operation.BatchTimeout = timeoutValue; or by operation: bulkOperation.BatchTimeout = timeoutValue;
 ---> System.Data.SqlClient.SqlException (0x80131904): Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 35 - An internal exception was caught)
 ---> System.ComponentModel.Win32Exception (258): No error information
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParserStateObject.SNIWritePacket(PacketHandle packet, UInt32& sniError, Boolean canAccumulate, Boolean callerHasConnectionLock)
   at System.Data.SqlClient.TdsParserStateObject.SendAttention(Boolean mustTakeWriteLock)
   at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)
   at System.Data.SqlClient.TdsParserStateObject.ReadSniSyncOverAsync()
   at System.Data.SqlClient.TdsParserStateObject.TryReadNetworkPacket()
   at System.Data.SqlClient.TdsParserStateObject.TryPrepareBuffer()
   at System.Data.SqlClient.TdsParserStateObject.TryReadUInt32(UInt32& value)
   at System.Data.SqlClient.TdsParserStateObject.TryReadPlpLength(Boolean returnPlpNullIfNull, UInt64& lengthLeft)
   at System.Data.SqlClient.TdsParser.TryGetDataLength(SqlMetaDataPriv colmeta, TdsParserStateObject stateObj, UInt64& length)
   at System.Data.SqlClient.TdsParser.TryProcessColumnHeaderNoNBC(SqlMetaDataPriv col, TdsParserStateObject stateObj, Boolean& isNull, UInt64& length)
   at System.Data.SqlClient.SqlDataReader.TryReadColumnInternal(Int32 i, Boolean readHeaderOnly)
   at System.Data.SqlClient.SqlDataReader.GetValues(Object[] values)
   at System.Data.ProviderBase.SchemaMapping.LoadDataRow()
   at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping)
   at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
   at System.Data.Common.DataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
   at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
   at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
   at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)
   at .(DbCommand , BulkOperation , Int32 )
   at .( , DbCommand )
   at .Execute(List`1 actions)
ClientConnectionId:2cdfc433-0b83-4129-80f7-f5f0e082fea3
Error Number:-2,State:0,Class:11
   --- End of inner exception stack trace ---
   at .Execute(List`1 actions)
   at Z.BulkOperations.BulkOperation.Execute()
   at Z.Dapper.Plus.DapperPlusAction.Execute()
   at Z.Dapper.Plus.DapperPlusActionSet`1.AddAction(String mapperKey, DapperPlusActionKind actionKind, TEntity item)
   at Z.Dapper.Plus.DapperPlusActionSet`1..ctor(BaseDapperPlusActionSet oldActionSet, String mapperKey, DapperPlusActionKind actionKind, TEntity item, Func`2[] selectors)
   at Z.Dapper.Plus.DapperPlusActionSet`1.BulkDelete[T](String mapperKey, T item, Func`2[] selectors)
   at MyDao.Delete(DapperPlusActionSet`1 actionSet, Program program)

Further technical details

JonathanMagnan commented 1 year ago

Hello @ElenaShlykova, thank you for reaching out to ZZZ Projects. Due to the holiday vacation, we will have limited email access from December 23 to January 2, which may cause some delays. We are sorry for any inconvenience.

JonathanMagnan commented 1 year ago

Hello @ElenaShlykova ,

This error looks to be caused by a timeout.
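
For reference, the exception message quoted in the report already spells out the two ways to raise that timeout; as a small sketch (the value appears to be in seconds, judging by "Current Timeout = 120"):

// Globally, for every bulk operation (pattern quoted verbatim in the exception message):
BulkOperationManager.BulkOperationBuilder = operation => operation.BatchTimeout = 600;

// Or per operation, on the BulkOperation instance being executed:
bulkOperation.BatchTimeout = 600;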

Approximately how long does the operation take without this nvarchar(max) column, or with the column but without auditing? And do you know of anything, such as a trigger, that could explain why it takes so long?

Is it possible for you to reproduce it in a standalone project that contains the minimum code, so we can try it? We already have multiple unit tests with an nvarchar(max) column, so there is surely something we are missing here. You can send the project privately if needed: info@zzzprojects.com

Best Regards,

Jon

ElenaShlykova commented 1 year ago

It looks like the problem is in the generated SQL. I captured the SQL generated by Dapper Plus using SQL Profiler and got the same error when running the script in SQL Server Management Studio. This is the SQL:

exec sp_executesql N'MERGE INTO [mytable]  AS DestinationTable
USING
(
SELECT TOP 100 PERCENT * FROM (SELECT @0_0 AS [id], @0_1 AS ZZZ_Index) AS StagingTable ORDER BY ZZZ_Index
) AS StagingTable
ON DestinationTable.[id] = StagingTable.[id]
WHEN MATCHED   THEN
    DELETE
OUTPUT
    $action,
    StagingTable.ZZZ_Index,
    DELETED.[payload] AS [payload_zzzdeleted]

;',N'@0_0 bigint,@0_1 nvarchar(19),@0_2 int',@0_0=20466,@0_1=0

This looks like a SQL Server limitation: it throws an error when a huge column is included in the OUTPUT clause. It works once I remove the payload column from the output.

I also found a workaround with Dapper Plus: everything works if I set ForceSelectOutput = true on the bulk operation. In this case, a temporary table is used to retrieve the old values. Please let me know if this is the best solution.
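
For anyone who lands on the same error, the workaround can presumably be applied either per operation or globally; this is a sketch that assumes ForceSelectOutput is reachable through the same options/builder hooks mentioned elsewhere in this thread (variable names are illustrative):

// Per operation, only for the bulk actions that touch the nvarchar(max) column:
connection.UseBulkOptions(options => options.ForceSelectOutput = true)
          .BulkDelete(programs);

// Or globally, so every bulk operation retrieves its OUTPUT through a temporary table first:
BulkOperationManager.BulkOperationBuilder = operation => operation.ForceSelectOutput = true;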

JonathanMagnan commented 1 year ago

Thank you for reaching out to ZZZ Projects. Due to the holiday vacation, we will have limited email access, which may cause some delays. We are sorry for any inconvenience.

JonathanMagnan commented 1 year ago

Hello @ElenaShlykova ,

Thank you for the additional information. We will look into it further to understand why it happens.

In the meantime, ForceSelectOutput is surely a great solution. As you have probably already seen, it will OUTPUT the values into a table variable before selecting them afterward.

I will try to give you an update very soon.

JonathanMagnan commented 1 year ago

Hello @ElenaShlykova ,

We are probably missing something, but my developer was not able to reproduce it. We tested various scenarios with very large content, and outputting directly versus into a table gave pretty much the same performance (mostly a 1-3% difference).

Do you think you could create a runnable project or provide a SQL script that reproduces the issue? It doesn't need to be your project, just a new solution with the minimum code to reproduce the issue. You can send it in private here: info@zzzprojects.com

JonathanMagnan commented 1 year ago

Hello @ElenaShlykova

Unfortunately, since we didn't hear back from you, I will close this issue.

As previously mentioned, we need a runnable project to be able to assist you.

We will reopen the issue if a project is received.

Feel free to contact us for questions, issues or feedback.

Best Regards,

Jon