haskell / attoparsec

A fast Haskell library for parsing ByteStrings
http://hackage.haskell.org/package/attoparsec

Some bottlenecks #128

Closed varosi closed 7 years ago

varosi commented 7 years ago

I'm trying to make this test a bit faster: https://bitbucket.org/ewanhiggs/csv-game

And I hit this bottleneck in attoparsec via the cassava library:

 COST CENTRE       MODULE                            SRC                                                    %time %alloc
 >>=.\.succ'       Data.Attoparsec.Internal.Types    Data\Attoparsec\Internal\Types.hs:146:13-76             39.9   17.3
 >>=.\             Data.Attoparsec.Internal.Types    Data\Attoparsec\Internal\Types.hs:(146,9)-(147,44)      16.6   18.2

Do you think that something could be inlined more or strictified?

bgamari commented 7 years ago

I would be very suspicious of that profile. Keep in mind that the cost centre profiler interferes with optimizations in order to preserve cost centres (either added by you with SCC pragmas or by the compiler with -fprof-auto and friends). There is a very good chance that the sites you point out do in fact inline away in an unprofiled build. You would need to look at the simplified Core to confirm this.
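To see why the bind in that profile can vanish in an optimized build, here is a minimal sketch of a CPS-style parser in the same spirit as attoparsec's internal `Parser` type (this is a hypothetical miniature, not attoparsec's real API; it uses `String` rather than `ByteString` and drops the failure/position bookkeeping). The point is that `(>>=)` is nothing but continuation plumbing, so once GHC inlines it at `-O2` there is no residual closure for a profiler to attribute cost to:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A parser takes the remaining input plus failure and success
-- continuations; results are delivered by calling a continuation
-- rather than by returning a result value.
newtype Parser a = Parser
  { runParser :: forall r. String            -- remaining input
              -> (String -> r)               -- failure continuation
              -> (String -> a -> r)          -- success continuation
              -> r }

instance Functor Parser where
  fmap f p = Parser $ \inp lose win ->
    runParser p inp lose (\inp' a -> win inp' (f a))

instance Applicative Parser where
  pure a = Parser $ \inp _ win -> win inp a
  pf <*> pa = Parser $ \inp lose win ->
    runParser pf inp lose $ \inp' f ->
      runParser pa inp' lose $ \inp'' a -> win inp'' (f a)

instance Monad Parser where
  -- Bind just threads the continuations through; with the INLINE
  -- pragma, GHC can fuse chains of (>>=) away entirely at -O2.
  m >>= k = Parser $ \inp lose win ->
    runParser m inp lose $ \inp' a -> runParser (k a) inp' lose win
  {-# INLINE (>>=) #-}

-- Consume one character, failing on empty input.
anyChar :: Parser Char
anyChar = Parser $ \inp lose win ->
  case inp of
    []       -> lose inp
    c : rest -> win rest c

-- Kick off a parse with trivial top-level continuations.
parse :: Parser a -> String -> Maybe a
parse p inp = runParser p inp (const Nothing) (\_ a -> Just a)

main :: IO ()
main = print (parse ((,) <$> anyChar <*> anyChar) "ab")
```

Under cost-centre profiling, however, the compiler must keep the lambdas inside `(>>=)` distinct so their costs can be attributed, which is exactly why they dominate the profile above while costing little in a normal build.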

varosi commented 7 years ago

You're absolutely right! The Core is much more dense. Out of interest, how can I profile optimized code?

bgamari commented 7 years ago

Funny that you should ask; just yesterday I responded to a post on haskell-cafe describing one such (admittedly low-level) mechanism. Ticky profiling and reading Core are currently the only tools we have for understanding low-level performance.

That being said, I have long been interested in bringing statistical profiling to Haskell and have some work introducing such a profiler in the runtime system.

varosi commented 7 years ago

DWARF support is great! But I'm out of luck, because I'm a Windows developer/user. I hope GHC gains some PDB support in the future so that I could use good tools like Intel VTune.