Closed: adamburgess closed this issue 9 years ago.
> Currently, it will crash and burn if more than one demo is parsed in different threads simultaneously
Yep, that's why we're currently not doing it this way.
This should be fixed by attaching the properties to the DemoParser instance instead of a static class.
Eww. Yes, that would allow multithreading, but it totally wrecks the bitstream abstraction layer. The real problem is simply those nested bitstreams. As I mentioned over at #22, they're currently necessary because of protobuf design drawbacks. However, I'm working on bitstream-based protobuf code that should be both significantly faster and eliminate the need for nested bitstreams.
@moritzuehling What about instance baseline parsing? Is there a specific reason for keeping the raw byte arrays and freshly parsing them for every entity? Maybe we could create the entities once and then clone them each time?
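The clone-instead-of-reparse idea could be sketched roughly like this. Note that `Entity`, `BaselineCache`, and `ParseBaseline` are hypothetical names for illustration, not DemoInfo's actual API:

```csharp
using System;
using System.Collections.Generic;

// Illustrative entity type; DemoInfo's real entity model differs.
class Entity
{
    public int ServerClassId;
    public Dictionary<string, object> Properties = new Dictionary<string, object>();

    // Copy the already-parsed baseline instead of re-reading raw bytes.
    public Entity Clone() => new Entity
    {
        ServerClassId = ServerClassId,
        Properties = new Dictionary<string, object>(Properties)
    };
}

class BaselineCache
{
    readonly Dictionary<int, Entity> parsed = new Dictionary<int, Entity>();

    public Entity GetInstance(int classId, byte[] rawBaseline)
    {
        if (!parsed.TryGetValue(classId, out var template))
        {
            // Expensive path: parse the raw baseline bytes once per class.
            template = ParseBaseline(classId, rawBaseline);
            parsed[classId] = template;
        }
        // Cheap path: hand out a copy for every new entity of that class.
        return template.Clone();
    }

    // Stand-in for the real property decoding.
    static Entity ParseBaseline(int classId, byte[] raw) =>
        new Entity { ServerClassId = classId };
}
```

Whether cloning actually beats reparsing would depend on how deep the property dictionaries are; a shallow copy like the one above only works if the property values themselves are immutable.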
I implemented the bitstream-based protobuf code in #26. There are no nested bitstreams (except a few lazy ones for instance baselines, but they really don't hurt), so the performance of UnsafeBitStream allocations no longer matters.
Nonetheless: Thank you for sharing your ideas!
I did some tests, and it turns out that allocating and pinning the memory for UnsafeBitStream takes around 2 seconds to complete on my system. See http://git.io/b_HBJg for my implementation.
Making only two allocations per demo speeds that process up, shaving 2 seconds off the total. Combined with the fixes from #22, that comes to around 7 seconds saved. On my system, the parse time for DHW14-GF-ldlc-vs-nip-dust2 drops from 10 seconds to 3 seconds, which is a significant improvement.
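A minimal sketch of what "only two allocations per demo" could look like: pin one buffer up front and refill it for each chunk, instead of allocating and pinning fresh memory for every UnsafeBitStream. `PinnedBuffer` here is a made-up helper, not the linked implementation:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical helper: allocate and pin once, reuse for the whole demo.
sealed class PinnedBuffer : IDisposable
{
    readonly byte[] data;
    GCHandle handle;

    // Raw pointer for unsafe bit-level reads; stays valid while pinned.
    public IntPtr Pointer { get; }

    public PinnedBuffer(int size)
    {
        data = new byte[size];
        // GCHandle.Alloc with Pinned is the expensive step we want to
        // pay only once per parser, not once per bitstream.
        handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        Pointer = handle.AddrOfPinnedObject();
    }

    // Refill the same pinned memory with the next chunk of demo data.
    public void Load(byte[] source, int length) =>
        Buffer.BlockCopy(source, 0, data, 0, length);

    public void Dispose() => handle.Free();
}
```

A parser would then hold two of these for the lifetime of the demo: one for the outer stream and one for the nested ReadEnterPVS reader mentioned below.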
The reason you need two allocations is that there is a nested reader (ReadEnterPVS). Currently, it will crash and burn if more than one demo is parsed in different threads simultaneously; this should be fixed by attaching the properties to the DemoParser instance instead of a static class.
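The per-instance fix suggested here might look roughly like this in C#. `BitStreamState` and the field names are invented for illustration; the point is just that state formerly held in static fields moves onto the parser instance, so two parsers on different threads no longer share (and corrupt) the same buffers:

```csharp
using System;

// Made-up container for the state that was previously static.
class BitStreamState
{
    public byte[] OuterBuffer;
    public byte[] NestedBuffer; // for the nested ReadEnterPVS reader
}

class DemoParser
{
    // Per-instance state: each parser owns its own two buffers,
    // matching the "two allocations per demo" observation above.
    readonly BitStreamState state = new BitStreamState
    {
        OuterBuffer = new byte[1 << 16],
        NestedBuffer = new byte[1 << 16]
    };

    public BitStreamState State => state;

    public void ParseTick()
    {
        // Reads would go through this.state rather than a static class,
        // so parsing two demos concurrently cannot stomp on each other.
    }
}
```

The trade-off raised in the thread still applies: the bitstream layer then needs a reference to its owning parser (or its state), which leaks parser details into the abstraction.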
Any thoughts? I'm marking this issue as a request for comments, mainly.