awslabs / athena-glue-service-logs

Glue scripts for converting AWS Service Logs for use in Athena
Apache License 2.0

ERROR TaskSetManager: Total size ... is bigger than spark.driver.maxResultSize (1024.0 MB) #14

Open chrs-myrs opened 5 years ago

chrs-myrs commented 5 years ago

I cannot run the CloudFront task without getting the following error:

ERROR TaskSetManager: Total size of serialized results of 3055 tasks (1052.9 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)

dacort commented 5 years ago

Hi @chrs-myrs - this can be an issue if you have a large number of source files that you're trying to convert. As a workaround, can you try setting spark.driver.maxResultSize on the Glue job?

In the "Security configuration, script libraries, and job parameters (optional)". ===> "Job Parameters" section, add the following key --conf and value spark.driver.maxResultSize=2g.

Long term, we may need to find a way to better filter the initial set of inbound files to a smaller set, possibly as part of #12.
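For reference, the same workaround can also be applied per run with boto3 instead of through the console. A minimal sketch, assuming a placeholder job name:

```python
import boto3

glue = boto3.client("glue")

# Override the job's default arguments for a single run. This is equivalent to
# adding the key `--conf` with value `spark.driver.maxResultSize=2g` under
# "Job parameters" in the console.
run = glue.start_job_run(
    JobName="cloudfront_log_converter",  # placeholder job name
    Arguments={"--conf": "spark.driver.maxResultSize=2g"},
)
print(run["JobRunId"])
```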

RickardCardell commented 4 years ago

When raising spark.driver.maxResultSize to 2g or higher, it's also a good idea to increase the driver memory so the driver doesn't exceed the memory allocated by YARN, which would fail the job. The setting is spark.driver.memory.

Adding two Spark configs is done like this:

Key: --conf
Value: spark.driver.maxResultSize=2g --conf spark.driver.memory=8g
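The same per-run override as above works here too, chaining both driver settings in a single --conf value to mirror the console key/value; a minimal sketch with a placeholder job name:

```python
import boto3

# Pass both driver settings in one --conf argument for a single job run.
boto3.client("glue").start_job_run(
    JobName="cloudfront_log_converter",  # placeholder job name
    Arguments={
        "--conf": "spark.driver.maxResultSize=2g --conf spark.driver.memory=8g",
    },
)
```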

chrs-myrs commented 3 years ago

Setting maxResultSize gave us enough headroom to get this to run properly.

jpduckwo commented 2 years ago

I'm experiencing this error, but only on subsequent job runs. The first time I run the job, even with 100,000s of files in the processing folder (CloudFront logs), it works with no memory issues; on subsequent runs it keeps failing. Anyone got any idea? I've been trying to move files around and process in batches, but it's a pain. Should this library be able to handle huge file counts without issues, or should I be pre-moving logs into day folders and only processing a day at a time?
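For anyone trying the "process a day at a time" approach, here is a rough sketch of one way to stage a single day's logs before a run. The bucket and prefix names are placeholders (not part of this repo), and the date filter relies on CloudFront access log keys embedding the date as <distribution-id>.YYYY-MM-DD-HH.<id>.gz:

```python
import boto3

s3 = boto3.client("s3")

bucket = "my-log-bucket"                             # placeholder
source_prefix = "cloudfront/raw/"                    # placeholder
staging_prefix = "cloudfront/staging/2021-06-01/"    # one day per run
day = "2021-06-01"

# Copy only the objects whose key contains the target date into a staging
# prefix, then point the Glue job's source location at that prefix.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=source_prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if day in key:  # crude date filter based on the log file name
            s3.copy_object(
                Bucket=bucket,
                CopySource={"Bucket": bucket, "Key": key},
                Key=staging_prefix + key.split("/")[-1],
            )
```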