Sorry for the late answer; for some reason I didn't receive an email notification.
It is possible, and we have developed a new version of the library that provides higher flexibility to fit users' needs. The new version is expected to be released soon (in 1-2 months).
In the new version we have achieved significant speed-ups for both feature computation and inference. Of course, things could be made even faster with parallelization; for example, using CUDA to compute the features seems like a straightforward option and could improve overall speed considerably. The new version is designed with possible extensions in mind, so one can easily add the desired functionality.
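To illustrate why feature computation parallelizes so well, here is a minimal CPU-side sketch; `compute_features` is a hypothetical placeholder, not the library's actual API. The same per-sample independence is what makes a CUDA version straightforward (one thread or block per sample):

```python
from concurrent.futures import ProcessPoolExecutor

def compute_features(sample):
    # Hypothetical placeholder for the library's per-sample feature computation.
    return [x * x for x in sample]

def compute_features_parallel(samples, workers=2):
    # Each sample's features are independent of the others, so the work
    # splits cleanly across processes. A GPU implementation would exploit
    # the same independence, just with far more parallel lanes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compute_features, samples))

if __name__ == "__main__":
    samples = [[1, 2], [3, 4]]
    print(compute_features_parallel(samples))
```

This is only a sketch of the parallelization pattern, not the library's implementation; in practice the per-sample computation would be the library's own feature extractor.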
Very impressive project and easy to implement.