As a TM1py user of the write_async function, I can specify how many threads should be used for writing.
This speeds up the write operation significantly, in an almost linear fashion.
Can we accomplish similar results for data retrieval by parallelizing MDX execution?
With release 1.11, the execute_mdx_dataframe and execute_mdx_csv functions accept an mdxpy query object instead of a raw query string.
It should be feasible to break the mdxpy query into smaller sub-queries and execute them in parallel.
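A rough sketch of the idea, not an existing TM1py feature: split the query along one dimension, run each sub-query through execute_mdx_dataframe on its own thread, and concatenate the results. The cube name ([Sales]), dimensions, region elements, and connection parameters below are hypothetical, and whether a single TM1Service session can safely be shared across threads depends on your setup.

```python
from concurrent.futures import ThreadPoolExecutor

import pandas as pd
from TM1py import TM1Service

# Hypothetical MDX template; the split happens along the Region dimension.
MDX_TEMPLATE = """
SELECT
  NON EMPTY {{ [Period].[Period].MEMBERS }} ON COLUMNS,
  NON EMPTY {{ [Product].[Product].MEMBERS }} ON ROWS
FROM [Sales]
WHERE ( [Region].[Region].[{region}] )
"""

REGIONS = ["North", "South", "East", "West"]  # hypothetical split elements


def read_in_parallel(tm1: TM1Service, max_workers: int = 4) -> pd.DataFrame:
    """Execute one sub-query per region concurrently and stitch the results."""
    queries = [MDX_TEMPLATE.format(region=region) for region in REGIONS]
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        frames = list(executor.map(tm1.cells.execute_mdx_dataframe, queries))
    return pd.concat(frames, ignore_index=True)


if __name__ == "__main__":
    # Connection details are placeholders; adjust to your environment.
    with TM1Service(address="localhost", port=8001, user="admin", password="", ssl=True) as tm1:
        df = read_in_parallel(tm1, max_workers=4)
        print(df.head())
```

With an mdxpy query object instead of raw strings, the same pattern would apply: derive one sub-query per slice of the chosen dimension and fan the sub-queries out across the thread pool.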