Our team is developing Databricks notebooks using Spark and Scala. We are inserting data into collections in Cosmos DB.
Following multiple guides, we set the configuration fields:

"spark.cosmos.throughputControl.globalControl.container" = <the collection dedicated to throughput control>
"spark.cosmos.throughputControl.targetThroughputThreshold" = 0.2

to limit the RU consumption in Cosmos DB. However, after multiple executions it looks like the library is not honoring the limit we set: the RU usage can grow up to 100%, as if the limitation were simply ignored. We are opening this issue at Microsoft's suggestion.
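For reference, this is a minimal sketch of the write configuration we believe is required, based on the connector documentation. Account, database, and container names are placeholders. In particular, if we read the docs correctly, "spark.cosmos.throughputControl.enabled" must be set to "true" explicitly; providing only the globalControl container and the threshold may not activate throughput control at all, which could explain the behavior we see:

```scala
// Sketch of throughput-control settings for the Azure Cosmos DB Spark 3
// OLTP connector. All account/database/container values are placeholders.
val cosmosWriteConfig = Map(
  "spark.cosmos.accountEndpoint" -> "https://<account>.documents.azure.com:443/",
  "spark.cosmos.accountKey"      -> "<account-key>",
  "spark.cosmos.database"        -> "MyDatabase",
  "spark.cosmos.container"       -> "MyContainer",
  // Throughput control must be explicitly enabled; setting only the
  // threshold/container keys below is (per our reading) not sufficient.
  "spark.cosmos.throughputControl.enabled" -> "true",
  "spark.cosmos.throughputControl.name"    -> "writeLimiter",
  // Target at most 20% of the container's provisioned RUs.
  "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.2",
  // Dedicated metadata container the connector uses to coordinate the
  // RU budget across executors (name here is a placeholder).
  "spark.cosmos.throughputControl.globalControl.database"  -> "MyDatabase",
  "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
)

// Usage (assumes `df` is the DataFrame being inserted):
// df.write.format("cosmos.oltp").options(cosmosWriteConfig).mode("Append").save()
```

Note that the threshold is enforced client-side on a best-effort basis, so brief spikes above the target are expected; sustained 100% usage, as we observe, is not.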
Thanks for your feedback.