Right now we spawn a new GEGL process for every request, which means each process must scan for and load the operations again. This likely takes a non-trivial amount of time, and could be reduced by at least caching the operation lookup information.
Potentially one could also link many operations into a single shared object, reducing the number of disk reads required.
TODO: quantify how long this typically takes, on Heroku.
Would be nice if GEGL could report this to us, similar to imgflo/imgflo#92
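As a starting point for quantifying this, a minimal timing sketch (hypothetical: assumes the `gegl` CLI is on PATH and that `--list-all`, which enumerates operations, approximates the per-request startup cost):

```python
import subprocess
import time

def time_startup(cmd=("gegl", "--list-all")):
    """Time how long a fresh process takes to start, do its work, and exit.

    Default command is an assumption: `gegl --list-all` forces the
    operation scan/load we want to measure. Substitute the actual
    per-request invocation when measuring on Heroku.
    """
    start = time.monotonic()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, check=True)
    return time.monotonic() - start
```

Running this a few times (discarding the first run to separate cold-cache from warm-cache cost) would give the numbers the TODO above asks for.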