Closed ravikiransharvirala closed 5 months ago
Hello @ravikiransharvirala
More than a connector issue, this is an underlying capacity issue that will require investigation. With all connector settings unchanged, it could well be that limits are being hit on parallel runs, or that other workloads are using the capacity. As this will need looking into the cluster, it would be great if you could raise a support request, or alternatively send me details of the cluster at my Microsoft handle (ramacg) so that I can take it up further.
cc: @asaharn
@ag-ramachandran Thanks for responding. I shared cluster details via email. Please let me know if you haven't received it.
@ravikiransharvirala Not yet!
@ag-ramachandran interesting. I sent it to ramacg at microsoft.
I sent it again without links and images.
@ravikiransharvirala got it now. Will check
@ravikiransharvirala, please share the logs from STDOUT/STDERR/log4j so that I can look at the time correlation between these errors and the capacity too.
@ag-ramachandran Sure, will do that.
@ag-ramachandran Sent it as text. Let me know if you haven't received it.
@ravikiransharvirala no exceptions in that log though!
@ag-ramachandran maybe the message was truncated. I sent you a new email with the exception stack trace.
@ag-ramachandran Do you recommend persisting the DataFrame after reading the data through the Kusto connector? After reading the data from the database and performing transformations on it, I notice the connector making calls to the database throughout the job's execution.
These are the two queries I noticed while running the job (the job needs the entire data from the table).
@ravikiransharvirala, I need more specifics. If you are reading the same data again and again, it makes good sense to cache it. These queries are used to determine how data is read; the internals of reading differ between ForceSingle and ForceDistributed modes. In your case you can set readMode to ForceDistributed, and I think some of these queries would go away.
If, in ForceDistributed mode (Parquet export), you want to reuse the same exported file, set the transient cache option to true.
KUSTO_READ_MODE 'readMode' - Override the connector heuristic to choose between 'Single' and 'Distributed' mode. Options are - 'ForceSingleMode', 'ForceDistributedMode'. Scala and Java users may take these options from com.microsoft.kusto.spark.datasource.ReadMode.
KUSTO_DISTRIBUTED_READ_MODE_TRANSIENT_CACHE When 'Distributed' read mode is used and this is set to 'true', the request query is exported only once and exported data is reused.
Read up more on : https://github.com/Azure/azure-kusto-spark/blob/master/docs/KustoSource.md
P.S. It may not be related to this issue, but these are good options to set and try for optimized reads.
Describe the bug When writing data from Databricks to an ADX/Kusto cluster on Fabric, I'm seeing ThrottleExceptions that cause the writes to fail. These started occurring suddenly over the weekend, even though no changes were made to the code, the Kusto connector, or the table sizes.
spark env vars: zulu11-ca-amd64
spark version 12.2.x-scala2.12
Kusto Spark Connector Version: com.microsoft.azure.kusto:kusto-spark_3.0_2.12:4.0.2
Code
Kusto capacity limits: Ingestions: 12
I see Consumed at 100% while running the job. This wasn't an issue before, but since the weekend I see the Throttle Exception below:
com.microsoft.azure.kusto.data.exceptions.ThrottleException: Request was throttled, too many requests.