Open · MayankSri opened this issue 6 years ago
@MayankSri could you please send mail to askcosmosdb@microsoft.com with the full exception details and your account information, and we will look into it.
@MayankSri closing the issue. Feel free to re-activate it in case your issue is not addressed.
Happened to me connecting to the local Cosmos DB emulator service v1.19.102.5. The first time it happened, it was a "ServiceUnavailable" exception. It went on for several hours and then magically started working again. The second time, several days later, it was throwing GoneException; I performed the "Reset Data" context-menu operation from the Cosmos DB system tray icon, after which the code was able to connect again.
I don't know whether the two episodes are truly related. During the first occurrence, it seemed it also could not connect to Cosmos DB instances hosted on Azure.
Same thing happening to me. I suspect this may be related to the IP address of the machine changing.
Please re-open this issue as the problem persists in version 2.0.0 of the emulator. Resetting data every time the issue occurs is not a workable solution in the long run.
I'm happy to help investigate the issue.
@jkonecki Can you please follow the instructions for collecting trace logs here and send them to askcosmosdb@microsoft.com? Also, can you please look for any DocumentDB*.dmp minidumps in either %LOCALAPPDATA%\CrashDumps or %SystemDrive%\wfroot and attach them as well?
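In case it helps with gathering them, here is a small illustrative C# sketch (not an official tool, just a convenience) that lists any DocumentDB*.dmp files in the two locations mentioned above:

```csharp
using System;
using System.IO;
using System.Linq;

class DumpFinder
{
    static void Main()
    {
        // The two folders mentioned above; %SystemDrive%\wfroot may not exist on every machine.
        var dumpDirs = new[]
        {
            Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "CrashDumps"),
            Path.Combine((Environment.GetEnvironmentVariable("SystemDrive") ?? "C:") + Path.DirectorySeparatorChar, "wfroot"),
        };

        foreach (var dump in dumpDirs.Where(Directory.Exists)
                                     .SelectMany(dir => Directory.GetFiles(dir, "DocumentDB*.dmp")))
        {
            Console.WriteLine(dump);
        }
    }
}
```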
@milismsft I will - please note it may take a few days before the issue occurs again.
@milismsft I've just emailed trace files to askcosmosdb@microsoft.com
I've just noticed that the issue occurs when the Direct / TCP connection policy is used. I can switch to Gateway / HTTPS and connect without problems. Hope this helps in your investigation.
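For reference, the workaround looks roughly like this with the v2 .NET SDK; the endpoint and key below are placeholders for the local emulator values, and this only sidesteps the problem rather than fixing it:

```csharp
using System;
using Microsoft.Azure.Documents.Client;

// Switching from Direct/TCP to Gateway/HTTPS avoids the GoneException for me.
var connectionPolicy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Gateway,   // instead of ConnectionMode.Direct
    ConnectionProtocol = Protocol.Https        // instead of Protocol.Tcp
};

var client = new DocumentClient(
    new Uri("https://localhost:8081/"),        // emulator endpoint (placeholder)
    "<emulator-primary-key>",                  // emulator key (placeholder)
    connectionPolicy);
```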
I just sent a repro trace to askcosmosdb. Thanks.
More context about the reported issue (from offline thread):
Sorry for the delay, the issue comes and goes, so it took me a while to repro.
The issue is with the Emulator (the MSI-installed version). It seems (I haven't confirmed) that the Emulator is unhappy if I try to access a collection that I created before the machine was restarted.
DocumentClientException: Message: {"Errors":["The requested resource is no longer available at the server."]} ActivityId: 335ecfb6-b94d-4a5e-b7f9-c2f9812f4a94, Request URI: /apps/DocDbApp/services/DocDbServer20/partitions/a4cb4960-38c8-11e6-8106-8cdcd42c33be/replicas/1p/, RequestStats: RequestStartTime: 2019-08-14T00:56:44.7105358Z, RequestEndTime: 2019-08-14T00:56:44.7125112Z, Number of regions attempted: 1 ResponseTime: 2019-08-14T00:56:44.7125112Z, StoreResult: StorePhysicalAddress: rntbd://127.0.0.1:10253/apps/DocDbApp/services/DocDbServer20/partitions/a4cb4960-38c8-11e6-8106-8cdcd42c33be/replicas/1p/, LSN: 541, GlobalCommittedLsn: -1, PartitionKeyRangeId: , IsValid: True, StatusCode: 410, SubStatusCode: 1000, RequestCharge: 0, ItemLSN: -1, SessionToken: , UsingLocalLSN: True, TransportException: null, ResourceType: Collection, OperationType: Read , SDK: Microsoft.Azure.Documents.Common/2.2.0.0, Windows/10.0.18890 documentdb-netcore-sdk/2.4.0
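If it helps to capture those details when the 410 occurs, a minimal sketch like the one below (v2 SDK; the database and collection ids are hypothetical) wraps the failing collection read and logs the status code and ActivityId so they can be matched against the emulator traces:

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static class CollectionReadProbe
{
    // "MyDb" and "MyCollection" are hypothetical ids used only for illustration.
    public static async Task ReadWithLoggingAsync(DocumentClient client)
    {
        var collectionUri = UriFactory.CreateDocumentCollectionUri("MyDb", "MyCollection");
        try
        {
            var response = await client.ReadDocumentCollectionAsync(collectionUri);
            Console.WriteLine($"Read OK, request charge: {response.RequestCharge}");
        }
        catch (DocumentClientException ex) when (ex.StatusCode == HttpStatusCode.Gone)
        {
            // 410 Gone from the emulator; log the ActivityId so it can be correlated with the trace files.
            Console.WriteLine($"410 Gone. ActivityId: {ex.ActivityId}");
            throw;
        }
    }
}
```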
Still happening
I have the same problem System.AggregateException: One or more errors occurred. ---> Microsoft.Azure.Documents.ServiceUnavailableException: Service is currently unavailable. More info: https://aka.ms/cosmosdb-tsg-service-unavailable ActivityId: 9f432436-a2bf-4367-b012-867599c6371f, RequestStartTime: 2023-06-16T16:58:20.5860700Z, RequestEndTime: 2023-06-16T16:58:52.9886735Z, Number of regions attempted:1 {"systemHistory":[{"dateUtc":"2023-06-16T16:58:20.5850648Z","cpu":4.824,"memory":2711580.000,"threadInfo":{"isThreadStarving":"no info","availableThreads":32764,"minThreads":8,"maxThreads":32767},"numberOfOpenTcpConnection":0},{"dateUtc":"2023-06-16T16:58:30.6066921Z","cpu":8.676,"memory":2684852.000,"threadInfo":{"isThreadStarving":"False","threadWaitIntervalInMs":0.4164,"availableThreads":32765,"minThreads":8,"maxThreads":32767},"numberOfOpenTcpConnection":0},{"dateUtc":"2023-06-16T16:58:40.6107306Z","cpu":11.016,"memory":2499516.000,"threadInfo":{"isThreadStarving":"False","threadWaitIntervalInMs":0.0448,"availableThreads":32765,"minThreads":8,"maxThreads":32767},"numberOfOpenTcpConnection":0},{"dateUtc":"2023-06-16T16:58:50.6186634Z","cpu":6.630,"memory":2528716.000,"threadInfo":{"isThreadStarving":"False","threadWaitIntervalInMs":0.0905,"availableThreads":32765,"minThreads":8,"maxThreads":32767},"numberOfOpenTcpConnection":0}]} RequestStart: 2023-06-16T16:58:23.1005197Z; ResponseTime: 2023-06-16T16:58:23.1426242Z; StoreResult: StorePhysicalAddress: https://127.0.0.1:10252/apps/DocDbApp/services/DocDbMaster0/partitions/780e44f4-38c8-11e6-8106-8cdcd42c33be/replicas/1p/, LSN: -1, GlobalCommittedLsn: -1, PartitionKeyRangeId: , IsValid: False, StatusCode: 410, SubStatusCode: 20001, RequestCharge: 0, ItemLSN: -1, SessionToken: , UsingLocalLSN: False, TransportException: null, BELatencyMs: , ActivityId: 9f432436-a2bf-4367-b012-867599c6371f, RetryAfterInMs: ; ResourceType: Database, OperationType: Read RequestStart: 2023-06-16T16:58:23.1015209Z; ResponseTime: 2023-06-16T16:58:23.1597531Z; StoreResult: StorePhysicalAddress: https://127.0.0.1:10252/apps/DocDbApp/services/DocDbMaster0/partitions/780e44f4-38c8-11e6-8106-8cdcd42c33be/replicas/1p/, LSN: -1, GlobalCommittedLsn: -1, PartitionKeyRangeId: , IsValid: False, StatusCode: 410, SubStatusCode: 20001, RequestCharge: 0, ItemLSN: -1, SessionToken: , UsingLocalLSN: False, TransportException: null, BELatencyMs: , ActivityId: 9f432436-a2bf-4367-b012-867599c6371f, RetryAfterInMs: ; ResourceType: Database, OperationType: Read
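(For what it's worth, with the newer Microsoft.Azure.Cosmos SDK the Direct-to-Gateway switch mentioned earlier in the thread would look roughly like this; I haven't confirmed it avoids the ServiceUnavailableException above, and the endpoint and key are emulator placeholders.)

```csharp
using Microsoft.Azure.Cosmos;

// Force Gateway mode instead of the default Direct/TCP mode.
var client = new CosmosClient(
    "https://localhost:8081/",                 // emulator endpoint (placeholder)
    "<emulator-primary-key>",                  // emulator key (placeholder)
    new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway });
```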
Still happening in 2024. A Data Factory job is frequently, but not always, failing:
TraceComponentId: TransferTask TraceMessageId: RowBatchSinkWriteFailed @logId: Warning jobId: 73e54d9f-8a2d-399d-e082-82a2002de6b8 activityId: 294cb734-b0d9-40d4-9e56-b0b43dfcaa3f eventId: RowBatchSinkWriteFailed message: 'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Documents failed to import. Error message:{"Errors":["Encountered exception while executing function. Exception = Error: {\"Errors\":[\"The requested resource is no longer available at the server.\"]}\r\nStack trace: Error: {\"Errors\":[\"The requested resource is no longer available at the server.\"]}\n at createCallback (script.js:6350:26)\n at Anonymous function (script.js:687:29)"]} ActivityId: 12b900a5-0bf1-4a2e-b226-0cb4585d372c, documentdb-dotnet-sdk/2.5.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0.,Source=Microsoft.DataTransfer.DocumentDbManagement,StackTrace= at Microsoft.DataTransfer.DocumentDbManagement.DocumentDbUtility.RetryDocumentsRowByRow(BulkImportResponse result, Guid tId, Guid aId, Func`2 executeBulkImport, IErrorRowOutput errorRowOutput, CancellationToken cancellationToken, Boolean enableSkipFaultyRow) at Microsoft.DataTransfer.ClientLibrary.DocumentDb.Sink.DocumentDbJObjectSink.Write(IBatch`1 dataReader) at Microsoft.DataTransfer.Runtime.RowBatchSinkStageProcessor`2.<>c__DisplayClass23_1.<RowBatchSinkInternal>b__0(),''Type=Microsoft.Azure.Documents.DocumentClientException,Message={"Errors":["Encountered exception while executing function. Exception = Error: {\"Errors\":[\"The requested resource is no longer available at the server.\"]}\r\nStack trace: Error: {\"Errors\":[\"The requested resource is no longer available at the server.\"]}\n at createCallback (script.js:6350:26)\n at Anonymous function (script.js:687:29)"]} ActivityId: 12b900a5-0bf1-4a2e-b226-0cb4585d372c, documentdb-dotnet-sdk/2.5.1 Host/64-bit MicrosoftWindowsNT/6.2.9200.0,Source=Microsoft.Azure.Documents.Client,StackTrace= at Microsoft.Azure.Documents.GatewayStoreClient.<ParseResponseAsync>d__8.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Documents.GatewayStoreClient.<InvokeAsync>d__4.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Documents.GatewayStoreModel.<ProcessMessageAsync>d__8.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Documents.Client.DocumentClient.<ProcessRequestAsync>d__153.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Documents.Client.DocumentClient.<ExecuteStoredProcedurePrivateAsync>d__287`1.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Documents.BackoffRetryUtility`1.<ExecuteRetryAsync>d__5.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.Azure.Documents.ShouldRetryResult.ThrowIfDoneTrying(ExceptionDispatchInfo capturedException) at Microsoft.Azure.Documents.BackoffRetryUtility`1.
I am getting this error intermittently even though the document exists in the collection: