Closed by jordanpadams 12 months ago
@jordanpadams what's the best way to get a copy of the label for this product?
@alexdunnjpl a ping is out to the user.
@jordanpadams looking deeper into this error, it appears to be due to imminent exhaustion of the JVM heap on OpenSearch, rather than any one request/product being too large. (Presumably RAM allocation is currently 16GB on that node)
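The heap-pressure diagnosis above can be checked directly against the cluster's JVM stats. A minimal sketch, assuming access to the standard OpenSearch `_nodes/stats/jvm` endpoint (the domain URL below is a placeholder, and authentication is omitted):

```python
import json
import urllib.request

# Placeholder URL; substitute the actual OpenSearch domain endpoint.
NODE_STATS_URL = "https://example-opensearch-domain/_nodes/stats/jvm"

def heap_used_percent(stats: dict) -> dict:
    """Map node id -> JVM heap utilisation (%) from a _nodes/stats/jvm payload."""
    return {
        node_id: node["jvm"]["mem"]["heap_used_percent"]
        for node_id, node in stats["nodes"].items()
    }

def fetch_node_stats(url: str = NODE_STATS_URL) -> dict:
    """Fetch live node stats (needs network access and credentials)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example against a trimmed-down payload of the shape _nodes/stats/jvm returns:
sample = {
    "nodes": {
        "abc123": {"jvm": {"mem": {"heap_used_percent": 92}}},
    }
}
print(heap_used_percent(sample))  # {'abc123': 92}
```

A node sitting near the heap limit is exactly where OpenSearch starts rejecting requests with circuit-breaker errors like the one reported here.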
The fix here is to bump up the instance size to cope with peak throughput, and/or incorporate pause/retry behaviour in harvest.
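The pause/retry idea can be sketched as exponential backoff around the bulk write. This is not the actual harvest implementation; `bulk_index` is a hypothetical stand-in for whatever call returns the HTTP status from OpenSearch:

```python
import time

def index_with_retry(bulk_index, batch, max_retries=5, base_delay=1.0):
    """Retry a bulk request with exponential backoff when OpenSearch
    sheds load (HTTP 429 / circuit_breaking_exception)."""
    for attempt in range(max_retries):
        status = bulk_index(batch)
        if status != 429:
            return status  # success, or a non-retryable error for the caller
        # Back off before retrying: 1s, 2s, 4s, ... gives the JVM heap
        # time to recover between bursts.
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("OpenSearch still overloaded after retries")
```

With this shape, a transient heap spike costs a short pause instead of a failed harvest run.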
Closing as a duplicate of #125 on that basis, since the fix for that is a fix for this.
@alexdunnjpl nice sleuthing. 🎉
@sjoshi-jpl I see that psa is currently r5.4xlarge.search (128GB RAM) - did this get bumped up from r5.xlarge.search (16GB RAM) at some point recently?
@alexdunnjpl yes, this was recently bumped up based on our last conversation with @jordanpadams and @tloubrieu-jpl, as we discussed how PSA could be as large / resource-intensive as GEO.
Checked for duplicates
Yes - I've already checked
🐛 Describe the bug
When I harvest a data set with some very large data products, I get a "data too large" error and the data is not loaded into the Registry.
🕵️ Expected behavior
I expected the data to be loaded into the Registry nominally.
📜 To Reproduce
🖥 Environment Info
Linux
📚 Version of Software Used
3.7.6
🩺 Test Data / Additional context
TBD
🦄 Related requirements
No response
⚙️ Engineering Details
No response