[Open] kijewskimateusz opened this issue 2 years ago
Did this work? This approach looks like it would not provide the required parallelism. The `withColumn` function is being called in every iteration of the loop, causing the API calls to happen in sequence rather than in parallel.
@somuworld - apologies for the late reply. You are correct: this wouldn't create the parallelism that is needed. I supplied a different answer, albeit via email.
The problem with this solution, as you rightly point out, is that the DataFrame will execute a task for each iteration of the loop. A better solution is to first identify how many pages are available, build a DataFrame with one row per page, and then apply an action. That way the requests execute in parallel across the cluster.
The catch is when the REST API can't report the total number of pages. In that case I would argue that Spark isn't the right tool for the job: better to use plain Python, write the results to file, and then load them with Spark.
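For the case where the API does expose a page count, here is a minimal PySpark sketch of that approach. The endpoint `https://api.example.com/items` and the `total_pages` field are hypothetical placeholders, not part of the original guide:

```python
import requests
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

BASE_URL = "https://api.example.com/items"  # hypothetical endpoint

# Driver-side call to discover how many pages exist (assumes the API
# returns a "total_pages" field)
total_pages = requests.get(BASE_URL, params={"page": 1}).json()["total_pages"]

@udf(returnType=StringType())
def fetch_page(page):
    # Runs on the workers: one HTTP request per row, i.e. per page
    return requests.get(BASE_URL, params={"page": page}).text

# One row per page; the requests run in parallel across the executors
pages_df = spark.range(1, total_pages + 1).withColumnRenamed("id", "page")
results_df = pages_df.withColumn("payload", fetch_page(col("page")))

# Any action triggers the parallel fetch
results_df.write.mode("overwrite").json("/tmp/api_results")
```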
Yeah, and thank you, it worked for me like a charm. One question though: how do we handle expiring tokens? Our API uses OAuth 2.0, and we have multiple API endpoints where we paginate through many pages, which takes a long time to complete. The bearer access token is valid for only 15 minutes. I'm able to renew the token inside the UDF when the old one expires, but I'm unable to pass it on to the next API call. I tried using Databricks Delta tables (we run our code on Databricks), but it looks like we can't access them from a UDF when it's called from a DataFrame.
The only option I see is to write the access token to a file and read it from the UDF; whenever the token expires, the UDF renews it and updates the file. But I'm not happy with this approach, as it involves reading the file for each and every API call, which might affect performance. Any thoughts @jamesshocking
@somuworld, the only alternative that I can think of is to use the foreachPartition method on the RDD object, which you can access via DataFrame.rdd.
Sadly, broadcast variables are read-only and accumulators are numeric, which leaves either your option (a file or a memcache of sorts) or the RDD route.
As soon as you execute an action, the foreachPartition code will run. You pass a function to foreachPartition, and inside it you can declare a singleton representing your shared token, along with a separate function to execute your HTTP request.
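A rough sketch of what I mean. The auth/API endpoints, client credentials, and the `expires_in`/`access_token` field names are assumptions about your OAuth provider, and `pages_df` is the page-number DataFrame from the earlier example:

```python
import time
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # hypothetical token endpoint
API_URL = "https://api.example.com/items"            # hypothetical API endpoint

class TokenProvider:
    """Per-executor singleton that caches the bearer token and renews it
    shortly before the 15-minute expiry."""
    _token = None
    _expires_at = 0.0

    @classmethod
    def get(cls):
        if cls._token is None or time.time() > cls._expires_at - 60:
            payload = requests.post(TOKEN_URL, data={
                "grant_type": "client_credentials",
                "client_id": "my-client-id",          # hypothetical credentials
                "client_secret": "my-client-secret",
            }).json()
            cls._token = payload["access_token"]
            cls._expires_at = time.time() + payload["expires_in"]
        return cls._token

def fetch_partition(rows):
    # Runs once per partition on a worker; every row in the partition reuses
    # (and, when needed, silently refreshes) the same cached token
    for row in rows:
        headers = {"Authorization": f"Bearer {TokenProvider.get()}"}
        requests.get(API_URL, params={"page": row.page}, headers=headers)

pages_df.rdd.foreachPartition(fetch_partition)
```

If you need the responses back as data rather than just side effects, the same pattern works with mapPartitions instead of foreachPartition.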
Hello!
First, thank you for the detailed guide on how to handle API requests using worker nodes. Based on it, I'm trying to implement a similar solution for paginated APIs. My idea is to use a while loop to execute an API request for each page, but I'm not sure whether this approach will support parallel Spark execution when it comes to extracting the data hidden in the `results` struct.
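Roughly what I have in mind is sketched below. This is only an illustration: the endpoint, the `next`/`results` response fields, and the `call_api` UDF are placeholders, not from the original guide.

```python
import json
import requests
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf, col, explode
from pyspark.sql.types import StructType, StructField, StringType, ArrayType

spark = SparkSession.builder.getOrCreate()

# Assumed response shape: a "next" page URL (or null) plus a "results" array
response_schema = StructType([
    StructField("next", StringType()),
    StructField("results", ArrayType(StringType())),
])

@udf(returnType=response_schema)
def call_api(url):
    body = requests.get(url).json()
    return (body.get("next"), [json.dumps(r) for r in body.get("results", [])])

url = "https://api.example.com/items?page=1"   # hypothetical endpoint
result_df = None

while url is not None:
    request_df = spark.createDataFrame([(url,)], ["url"])
    # withColumn is evaluated on every iteration, so each page becomes its
    # own Spark job and the requests run one after another
    response_df = request_df.withColumn("response", call_api(col("url"))).cache()

    page_df = response_df.select(explode(col("response.results")).alias("result"))
    result_df = page_df if result_df is None else result_df.union(page_df)

    url = response_df.first()["response"]["next"]

# result_df now holds the exploded results from all pages
```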