Hello, thanks for your work!

I recently ran $ hasktags -e ~/ghc and it took around 3 minutes. Profiling and looking at the cost centres produced the following graph
(notice the time it took to complete). More profiling revealed that the findThings function retained a lot of data; since it is a short function, I thought readFile was the culprit. Changing the implementation to the lazy ByteString one, along these lines:
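The change is roughly of this shape (a sketch only; countLines is an illustrative helper, not the actual hasktags code): reading via Data.ByteString.Lazy instead of Prelude's readFile avoids materialising the whole file as a String.

```haskell
import qualified Data.ByteString.Lazy.Char8 as BL

-- Read the file as a lazy ByteString: the contents are streamed in
-- chunks rather than built up as one large linked list of Chars,
-- which keeps maximum allocation low for large inputs.
countLines :: FilePath -> IO Int
countLines path = do
  contents <- BL.readFile path
  return (length (BL.lines contents))
```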
resulted in the following graph
(notice the running time): we reduced maximum allocation almost four-fold.
As this program isn't a long-running one, the pitfalls of lazy IO don't really apply; still, a streaming framework would also solve the problem, at the cost of more imports. It seems a good tradeoff. What do you think?