Innixma closed this 1 year ago
I've also opened a PR for AutoGluon to officially use 0.4 as the default going forward (although version 0.8 still uses 0.1 as the default).
Merging this now, but please revert the change (or make it conditional) once a new release with the changed default is available. Thanks!
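The "make it conditional" suggestion could be sketched as a simple version gate: override `max_memory` only on releases that still default to 0.1. This is a hypothetical helper, not code from the PR; the helper name and the `0.5` boundary (the release assumed to ship the new default) are illustrative assumptions, since the thread does not state the exact version.

```python
def _parse_version(v: str) -> tuple:
    """Parse a dotted version string like '0.4.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def max_memory_override(installed_version: str, default_changed_in: str = "0.5"):
    """Return an explicit max_memory value, or None if no override is needed.

    Releases before `default_changed_in` are assumed to default to the
    conservative 0.1, so we pass 0.4 explicitly; newer releases are assumed
    to already default to 0.4. The boundary version is an assumption for
    illustration only.
    """
    if _parse_version(installed_version) < _parse_version(default_changed_in):
        return 0.4  # older release: override the 0.1 default
    return None  # new default already in place; no override required
```

The caller would then pass the returned value through only when it is not `None`, so the workaround disappears automatically once the dependency is upgraded.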
Updated max_memory to 0.4, since testing at this value produced no crashes; the 0.1 default is overly conservative. For batch_size=1 inference, this gives roughly 2x faster inference speed on average, because models fit into memory more often under the raised limit.