Closed zpappa closed 5 years ago
Sorry for the issue you're facing, but as you pointed out, this isn't the right forum to discuss it.
Fortunately, we do have a good way to ask questions like this: please send an email to AskCosmosDB@microsoft.com for issues with MongoDB stuff.
Was this issue ever resolved? We are facing the same issue, and I can see on Stack Overflow others are as well, but no solution that I can find. Thank you.
I need help too; I haven't found a solution yet.
Description: I realize this isn't the correct forum for this bug, since it concerns the MongoDB API rather than the Cosmos DB SQL API, but I don't see a related project I can file this issue against.
When reading data using Azure Databricks against a Cosmos DB MongoDB API:

```python
df = (spark.read.format("com.mongodb.spark.sql.DefaultSource")
      .option("database", "ingress")
      .option("collection", "ingress")
      .option("pipeline", pipeline)
      .load())
cnt = df.count()
```
When trying to perform a find, I receive the following error:
This is completely non-deterministic: sometimes it works and sometimes it does not, with the same pipeline details being provided. The collection being read holds roughly 170 MB of data across 1 million records, and it is provisioned with 5,000 RUs.
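Intermittent failures like this are often caused by request-rate throttling when the provisioned RUs are exhausted (Cosmos DB's MongoDB API surfaces this as error code 16500 / "Request rate is large"). Assuming that is the cause here, one workaround is to retry the read with exponential backoff. The sketch below is illustrative: `read_with_retry` is a hypothetical helper name, and the string matching on the exception message is an assumption about how the throttling error surfaces in your driver.

```python
import time

def read_with_retry(do_read, max_attempts=5, base_delay=1.0):
    """Call do_read(), retrying with exponential backoff on throttling errors.

    do_read is any zero-argument callable that performs the read (for example,
    a lambda wrapping the spark.read ... .count() call above). Errors that look
    like Cosmos DB throttling (16500 / "Request rate is large") are retried;
    all other errors propagate immediately.
    """
    for attempt in range(max_attempts):
        try:
            return do_read()
        except Exception as exc:
            msg = str(exc)
            transient = "16500" in msg or "Request rate is large" in msg
            if not transient or attempt == max_attempts - 1:
                raise
            # Back off 1s, 2s, 4s, ... before the next attempt.
            time.sleep(base_delay * (2 ** attempt))
```

With the Databricks job above, `do_read` could be `lambda: df.count()`. Increasing the provisioned RUs or narrowing the aggregation pipeline are the other levers if throttling is confirmed.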