Open vipinsorot opened 1 year ago
You can handle this in a couple of ways. First, increase the `MaxRetryCount` value (default 5): this tells the Polly retry policy to wait and retry up to n times, based on the setting. This is where the retry policy is defined: https://github.com/AzureCosmosDB/data-migration-desktop-tool/blob/f45805454bf824b163ee166f4982ac2994560447/Extensions/Cosmos/Cosmos.DataTransfer.CosmosExtension/CosmosDataSinkExtension.cs#L98
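For illustration, a Polly retry policy of this shape looks roughly like the following. This is a hedged sketch, not the tool's exact code; the `settings` object and the backoff formula are assumptions, though `MaxRetryCount` and `InitialRetryDurationMs` are real settings from the extension's documentation:

```csharp
using System;
using System.Net;
using Microsoft.Azure.Cosmos;
using Polly;

// Sketch: retry Cosmos 429 (TooManyRequests) responses with exponential
// backoff, driven by the MaxRetryCount and InitialRetryDurationMs settings.
var retryPolicy = Policy
    .Handle<CosmosException>(ex => ex.StatusCode == HttpStatusCode.TooManyRequests)
    .WaitAndRetryAsync(
        retryCount: settings.MaxRetryCount, // default 5
        sleepDurationProvider: attempt =>
            TimeSpan.FromMilliseconds(
                settings.InitialRetryDurationMs * Math.Pow(2, attempt - 1)));
```

With a policy like this, raising `MaxRetryCount` gives the sink more chances to absorb transient rate limiting before the job fails.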
@joelhulen Yes, I have converted my collection configuration from manual to autoscale RU/s. Can you please share an example of the retry mechanism?
My migration file:

```json
{
  "Source": "cosmos-nosql",
  "Sink": "cosmos-nosql",
  "SourceSettings": {
    "ConnectionString": "AccountEndpoint=*",
    "Database": "test",
    "Container": "Invoice",
    "PartitionKeyValue": "/data/dealClaimId",
    "Query": "SELECT FROM c"
  },
  "SinkSettings": {
    "ConnectionString": "AccountEndpoint=*",
    "Database": "test",
    "Container": "Invoice",
    "BatchSize": 100,
    "MaxRetryCount": 5,
    "RecreateContainer": false,
    "ConnectionMode": "Gateway",
    "CreatedContainerMaxThroughput": 1000,
    "UseAutoscaleForCreatedContainer": true,
    "InitialRetryDurationMs": 200,
    "WriteMode": "InsertStream",
    "IsServerlessAccount": false,
    "PartitionKeyPath": "/data/dealClaimId"
  },
  "Operations": []
}
```

It's still failing with a similar message.
@vipinsorot, the retry mechanism is already implemented. You just need to configure the `MaxRetryCount` setting in the config for the Cosmos DB extension, as documented here: https://github.com/AzureCosmosDB/data-migration-desktop-tool/tree/main/Extensions/Cosmos#sink
Try increasing that value to something higher, like 20. As for scale, simply switching from manual to auto-scale RU/s isn't necessarily enough to overcome rate limiting in high-volume loads. You might consider increasing the max RU/s in your auto-scale settings to a much higher number only while executing the tool.
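If it helps, a temporary autoscale bump can be scripted. The resource names below are placeholders, not values from this thread; substitute your own account, resource group, database, and container:

```
# Temporarily raise the autoscale max RU/s on the sink container
# before running the migration tool, then lower it again afterwards.
az cosmosdb sql container throughput update \
  --account-name my-cosmos-account \
  --resource-group my-rg \
  --database-name test \
  --name Invoice \
  --max-throughput 10000
```

Running the same command with a lower `--max-throughput` after the migration restores your normal cost profile.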
I just saw your last message. Increase `MaxRetryCount` under `SinkSettings` to something like 20, in addition to my notes on a potential auto-scale increase.
@joelhulen The job is exiting with the below message.
Updated migration JSON:

```json
{
  "Source": "cosmos-nosql",
  "Sink": "cosmos-nosql",
  "SourceSettings": {
    "ConnectionString": "AccountEndpoint=",
    "Database": "INTEGRATION",
    "Container": "Invoice",
    "PartitionKeyValue": "/data/dealClaimId"
  },
  "SinkSettings": {
    "ConnectionString": "AccountEndpoint=",
    "Database": "INTEGRATION",
    "Container": "Invoice",
    "BatchSize": 100,
    "MaxRetryCount": 40,
    "RecreateContainer": false,
    "ConnectionMode": "Gateway",
    "CreatedContainerMaxThroughput": 1000,
    "UseAutoscaleForCreatedContainer": true,
    "InitialRetryDurationMs": 200,
    "WriteMode": "InsertStream",
    "IsServerlessAccount": false,
    "PartitionKeyPath": "/data/dealClaimId"
  },
  "Operations": [
    {
      "SourceSettings": {
        "Query": "SELECT * FROM c"
      },
      "SinkSettings": {
        "Container": "Invoice"
      }
    }
  ]
}
```
Based on the message, I assume that no data was transferred, correct? If so, did you change any other settings besides `MaxRetryCount`?
Indeed, no data has been successfully copied over. To tackle this, I've increased the maximum Request Units per second (RU/s) to 10,000 specifically for the designated collection.
Are you able to run the application in debug mode to dig into why the data isn't copying over? From the log outputs you shared, it looks like there were no errors, per se, just that no data copied over. This could happen if the tool is unable to access the source data. I wonder if the max request (429) errors were coming from the source Cosmos DB database and not the destination one? Can you try scaling the source up before running the tool and see what happens?
Yes, I have already increased the RU/s to 10k for both source and sink.
The call at https://github.com/AzureCosmosDB/data-migration-desktop-tool/blob/f45805454bf824b163ee166f4982ac2994560447/Extensions/Cosmos/Cosmos.DataTransfer.CosmosExtension/CosmosDataSourceExtension.cs#L36 is setting `feedIterator.HasMoreResults` to false, which led to fetching a total of 0 records.
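For context, the source extension reads through a Cosmos SDK feed iterator; if `HasMoreResults` is false before the first `ReadNextAsync` call, the read loop exits immediately and zero records are fetched. A rough sketch of that loop (illustrative names, not the tool's exact code):

```csharp
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json.Linq;

// Sketch of the typical Cosmos query read loop the source extension relies on.
using FeedIterator<JObject> feedIterator = container.GetItemQueryIterator<JObject>(
    new QueryDefinition("SELECT * FROM c"));

// If HasMoreResults is false on the first pass, nothing is ever read.
while (feedIterator.HasMoreResults)
{
    FeedResponse<JObject> response = await feedIterator.ReadNextAsync();
    foreach (JObject item in response)
    {
        yield return item;
    }
}
```

So a query or container mismatch that yields an empty result set would produce exactly the "0 records" symptom, with no error logged.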
@joelhulen It worked after refactoring the code.
Error message:

```
Data transfer failed
Microsoft.Azure.Cosmos.CosmosException : Response status code does not indicate success: TooManyRequests (429); Substatus: 3200; ActivityId: cfe3d2b9-8315-4ce4-9833-2ef94a6f7d82; Reason: (code: TooManyRequests, message: {"Errors":["Request rate is large. More Request Units may be needed, so no changes were made. Please retry this request later. Learn more: http://aka.ms/cosmosdb-error-429"]})
```
Source: cosmos-nosql Sink: cosmos-nosql release:2.1.3