Implemented caching in RapidApp to speed up the massive number of
colspec_test calls that happen at RapidApp startup.
The cache is implemented via CHI and, by default, uses a temporary
directory on the system that is built from your application name, the
RapidApp version and the DBIx::Class version. Over time, and with more
use cases, we will fine-tune this key, but you can also set it yourself
via cache_key to apply your own logic.
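To illustrate the idea, here is a minimal sketch of deriving a default
cache location from the application name and the two module versions,
so that upgrading either module automatically starts with a fresh
cache. This is an illustrative stand-in, not the actual implementation
(which lives in Perl on top of CHI); the helper name and the exact key
layout are assumptions.

```python
import hashlib
import os
import tempfile

def default_cache_dir(app_name, rapidapp_version, dbic_version):
    """Hypothetical sketch: build a per-app, per-version cache
    directory under the system temp dir. Changing the app name or
    either version yields a different directory, so stale entries
    from an older RapidApp or DBIx::Class are never reused."""
    key = "-".join([app_name, rapidapp_version, dbic_version])
    digest = hashlib.md5(key.encode()).hexdigest()
    return os.path.join(tempfile.gettempdir(), f"rapidapp-cache-{digest}")
```

An application could override this default entirely (the cache_key
option mentioned above) when it wants its own invalidation logic.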
The bigger problem was finding the right spot to cache. Caching
directly in colspec_test was the first plan, but this sadly failed
horribly: the sheer number of calls to colspec_test, and of recursive
calls within it, made the startup time explode. The right place turned
out to be one step higher, in the colspec_select_columns function,
which uses no more information than colspec_test itself. That allowed
us to use (nearly) only the colspecs and the columns as the cache key.
We additionally spice the key with an MD5 digest of the deparsed
stringification of the functions involved in generating the cached
result, so that even editing those functions makes the cache realize
it needs to be rebuilt.
Startup time before the cache is populated is probably around 5%
higher, but with a filled cache, startup time drops to roughly 33% of
what it was without any cache implementation. Those numbers come from
a project with a huge number of databases, tables and columns; the
speed gain will not be significant on projects with only a small
number of database tables and columns.