tolbertam closed this 7 years ago
Went ahead and squashed, thanks!
@tolbertam - those options accept both fixed amounts and ratios, depending on the presence of a dot. So if you want to set 1024M and 2 cores, just write 1g and 2.
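To illustrate the dot heuristic in a standalone way (a hypothetical sketch of the rule as described above, not ccm's actual parsing code):

```python
# Hypothetical illustration of the rule above: a value containing a dot is
# read as a ratio of total system resources, anything else as a fixed amount.
def interpret(value):
    text = str(value)
    if '.' in text:
        return ('ratio', float(text))  # e.g. 0.2 -> 20% of the cores
    return ('fixed', text)             # e.g. '2' -> 2 cores, '1g' -> 1 GB

print(interpret(0.2))   # ('ratio', 0.2)
print(interpret('1g'))  # ('fixed', '1g')
print(interpret(2))     # ('fixed', '2')
```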
@tolbertam I tested your changes and they look good. However, an earlier change (https://github.com/pcmanus/ccm/commit/4b2957e255fbe663485dccdae0a6eb6a52b3fe7d) kind of broke DSEFS enablement, so maybe we should fix that here as well?
Here's a fix for the set_workloads() function that makes sure DSEFS is enabled when both the dsefs and spark workloads are selected:
```python
...
if 'spark' in self.workloads:
    # DSEFS should only be enabled when the 'dsefs' workload was also selected.
    if 'dsefs' in self.workloads:
        dsefs_enabled = True
    else:
        dsefs_enabled = False
    dse_options = {'dsefs_options': {'enabled': dsefs_enabled,
                                     'work_dir': os.path.join(self.get_path(), 'dsefs'),
                                     'data_directories': [{'dir': os.path.join(self.get_path(), 'dsefs', 'data')}]}}
...
```
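As a side note, the inner if/else could be collapsed to `dsefs_enabled = 'dsefs' in self.workloads`; either way, the resulting dse_options dict would presumably then be handed to set_dse_configuration_options(), as the rest of set_workloads() does.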
Thank you both for the feedback. I'll make both changes in a few minutes 👍
Addressed the feedback and squashed the commits (to make merging easier). Here are the changes on their own: https://gist.github.com/tolbertam/79b8799611772a339412fc695cc8bcb7
+1, I tested with a couple of different old and new versions of DSE and everything looks good.
Thanks, Andy!
Newer versions of DSE allow configuring Spark resource settings via dse.yaml in the following manner:
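For reference, the shape of those settings, written here as the options dict ccm would merge into dse.yaml (the key names follow DSE 5.1's resource_manager_options layout; treat the exact names as an assumption rather than a quote from this PR):

```python
# Assumed DSE 5.1-style dse.yaml layout, expressed as the dict that gets
# merged into dse.yaml. Values containing a dot are ratios of total system
# resources rather than fixed amounts.
spark_worker_options = {
    'resource_manager_options': {
        'worker_options': {
            'memory_total': 0.1,  # 10% of system memory
            'cores_total': 0.2,   # 20% of system cores
        }
    }
}
```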
This sets the resources used by Spark as a ratio of system resources instead of fixed amounts. I chose 0.1 for memory and 0.2 for cores, which on a modern MacBook Pro works out to roughly 1.6GB of RAM and 2 cores, close to the previous fixed config (1024M / 2 cores). The environment variables SPARK_WORKER_MEMORY and SPARK_WORKER_CORES are still used for older versions.
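Putting the two paths together, a minimal sketch of the version-dependent behavior (the 5.1 cutoff, function shape, and key names are assumptions for illustration, not the PR's actual code):

```python
# Hypothetical sketch: newer DSE gets ratio-based settings merged into
# dse.yaml, while older DSE keeps fixed amounts in SPARK_WORKER_* env vars.
def spark_resource_config(dse_version):
    major_minor = tuple(int(p) for p in dse_version.split('.')[:2])
    if major_minor >= (5, 1):  # assumed cutoff for the new dse.yaml options
        return ('dse.yaml', {'resource_manager_options': {'worker_options': {
            'memory_total': 0.1, 'cores_total': 0.2}}})
    return ('env', {'SPARK_WORKER_MEMORY': '1024M', 'SPARK_WORKER_CORES': '2'})

print(spark_resource_config('5.1.0'))  # -> ratio-based dse.yaml options
print(spark_resource_config('5.0.6'))  # -> fixed env var fallback
```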