dotnet / runtime

.NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.
https://docs.microsoft.com/dotnet/core/
MIT License

Eliminate EncodingForwarder performance impact #7803

Closed: redknightlois closed this issue 4 years ago

redknightlois commented 7 years ago

The cost introduced by the EncodingForwarder in cases where you don't even need it is far too punishing for applications that have to deal with lots of strings and encoding. In our measurements the cost of Encoding is non-negligible, to the point that if this is not solved we will eventually roll our own.

This is an example of the cost: [screenshot: utf8]

The funny thing is that at construction time we already know it is a UTF8Encoding.

private static readonly UTF8Encoding Encoding = new UTF8Encoding();

The virtual call itself will probably be tackled by the devirtualization work @AndyAyersMS is doing, but the EncodingForwarder will still be there, and it is a sizeable cost by itself.
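
For illustration, here is a minimal sketch of the kind of hot path being described (a hypothetical wrapper, not the actual RavenDB code); both calls go through Encoding's virtual surface and, at the time of this issue, through the EncodingForwarder:

using System;
using System.Text;

public static class Utf8Writer
{
    // Statically known to be a UTF8Encoding, yet every call below still dispatches virtually.
    private static readonly UTF8Encoding Encoding = new UTF8Encoding();

    public static int Write(string value, byte[] buffer)
    {
        // Two calls per string: GetMaxByteCount to validate the buffer, GetBytes to encode.
        // For short strings this per-call overhead dominates the actual encoding work.
        int maxSize = Encoding.GetMaxByteCount(value.Length);
        if (maxSize > buffer.Length)
            throw new ArgumentException("Buffer too small.", nameof(buffer));

        return Encoding.GetBytes(value, 0, value.Length, buffer, 0);
    }
}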

jkotas commented 7 years ago

Sounds like we should revert the change that introduced it.

cc @tarekgh @jamesqo

tarekgh commented 7 years ago

@redknightlois just to double check: your scenario measurements show EncodingForwarder costing around 7%. Is that right?

redknightlois commented 7 years ago

@tarekgh Actually it is more, because there are 2 calls for GetBytes and also because GetMaxByteCount will also call EncoderReplacementFallback.get_MaxCharCount. I would estimate that the unneeded virtual calls are costing around 15%. This is an extremely hot method for us; even a 1% improvement has ripple effects all over the place.

Keep in mind that these methods are usually called on small strings, so the ratio of actual encoding work to total cost tends to be low; the overhead dominates. So the implementation itself could also be improved.

For context, the method FindEscapePositionsIn scans the whole string to figure out where the escape positions are, and it is still much faster than calling GetMaxByteCount, which does essentially nothing. Code: https://github.com/ravendb/ravendb/blob/v4.0/src/Sparrow/Json/Parsing/JsonParserState.cs#L78
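
For readers without the repo at hand, a rough, hypothetical sketch of what such an escape-position scan does conceptually (the real FindEscapePositionsIn in the linked file differs); the point is that even a full pass over a short string is cheap next to the virtual-call overhead:

// Hypothetical, simplified escape-position scan; not the linked RavenDB implementation.
private static int CountEscapePositions(string value)
{
    int count = 0;
    for (int i = 0; i < value.Length; i++)
    {
        char c = value[i];
        // Characters that must be escaped in JSON output.
        if (c == '"' || c == '\\' || c < 0x20)
            count++;
    }
    return count;
}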

tarekgh commented 7 years ago

OK, I am going to revert the whole EncodingForwarder change.

redknightlois commented 7 years ago

I would mark a few of the virtual methods that could use it, like GetMaxByteCount, with AggressiveInlining, because once devirtualized you get good inlining opportunities for methods whose call prolog and epilog outweigh the actual work. @AndyAyersMS, will devirtualized methods honor AggressiveInlining?
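
A sketch of what that suggestion looks like in code (an illustrative subclass, not the actual proposed BCL change); the idea is that once the call site is devirtualized, AggressiveInlining encourages the JIT to inline a body that is cheaper than its own prolog and epilog:

using System.Runtime.CompilerServices;
using System.Text;

public class MyUtf8Encoding : UTF8Encoding
{
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public override int GetMaxByteCount(int charCount)
    {
        // Roughly what UTF8Encoding computes for the default fallback:
        // 3 bytes per char, plus one extra char of headroom for a trailing surrogate.
        return (charCount + 1) * 3;
    }
}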

tarekgh commented 7 years ago

I would mark a few of the virtual that could use it like GetMaxByteCount with AggressiveInlining

@redknightlois what do you mean by "it"? Are you suggesting keeping the EncodingForwarder and just marking the callers with AggressiveInlining, or do you mean something else?

redknightlois commented 7 years ago

Unless the forwarder can be devirtualized, it won't honor the aggressive inlining. You should try, though. I was referring to the actual user-facing methods (where appropriate).

AndyAyersMS commented 7 years ago

Devirtualization can lead to inlining but it depends on the exact patterns in the code. Roughly, when type information propagates within a method or into inlined callees, we're more likely to devirtualize and then inline. When type information passes from inlined callees back to callers we can devirtualize but not inline.

Internally, this is whether the jit devirtualizes 'early' during importation or 'late' during inlining.

In cases where the jit is able to inline after devirtualizing, the aggressive inline attribute is honored.
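
My reading of that distinction, as two illustrative patterns (not official JIT documentation):

using System.Text;

public static class DevirtExamples
{
    // The concrete type is visible inside the method, so type information flows forward:
    // the JIT can devirtualize the call during importation ("early") and then consider inlining it.
    public static int KnownTypeLocally(string s)
    {
        Encoding e = new UTF8Encoding();
        return e.GetMaxByteCount(s.Length);
    }

    // The concrete type is only discovered by inlining the callee that creates it, so type
    // information flows back from an inlined callee: the call can be devirtualized ("late",
    // during inlining) but, per the explanation above, not also inlined.
    public static int KnownTypeFromCallee(string s)
    {
        Encoding e = CreateEncoding();
        return e.GetMaxByteCount(s.Length);
    }

    private static Encoding CreateEncoding() => new UTF8Encoding();
}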

tarekgh commented 7 years ago

Talked offline with @AndyAyersMS, and it looks like it would not be solid to depend on devirtualization for now, because it depends on the code patterns.

I'll go ahead and submit the PR to revert the encoding forwarder to get rid of the extra cost mentioned in this issue. Then we can look at other places where we can improve perf later.

Does this sound good?

redknightlois commented 7 years ago

Yeah.

AndyAyersMS commented 7 years ago

Can we get a perf test case out of this?

redknightlois commented 7 years ago

Sure, what do you need? The code is pretty low level; I can isolate it and send it. Would that work?

tarekgh commented 7 years ago

@AndyAyersMS we have perf tests for Encoding in

https://github.com/dotnet/corefx/blob/master/src/System.Text.Encoding/tests/Performance/Perf.Encoding.cs

If there is a specific scenario used by @redknightlois, we can add an extra test case there, or @redknightlois can submit a PR for it.


AndyAyersMS commented 7 years ago

Did we see a perf impact when the EncodingForwarder was added? Or did that happen before we had perf tests?

tarekgh commented 7 years ago

Unfortunately, it looks like we only started collecting perf data last month, while the EncodingForwarder work was done during July 2016. After we revert EncodingForwarder we can look at the new data and see the difference.

redknightlois commented 7 years ago

There is a subtle difference between the perf scenarios you have and the ones we have :)

In https://github.com/dotnet/corefx/blob/master/src/System.Text.Encoding/tests/Performance/Perf.Encoding.cs#L16, change that line to:

 int[] sizes = new int[] { 16, 32, 64, 128, 256, 512, 10000, 1000000 };

That way you cover both scenarios. Even if you had had the perf data, the problem would have been hidden in the noise. The new version will highlight the issue and prevent a regression.
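
This is not the corefx perf test itself, but a rough, self-contained illustration of why the small sizes matter: per-call overhead is visible at 16-512 chars and washes out at the large sizes.

using System;
using System.Diagnostics;
using System.Text;

public static class EncodingMicroBench
{
    public static void Main()
    {
        int[] sizes = { 16, 32, 64, 128, 256, 512, 10000, 1000000 };
        var encoding = new UTF8Encoding();

        foreach (int size in sizes)
        {
            string value = new string('a', size);
            byte[] buffer = new byte[encoding.GetMaxByteCount(size)];

            // Scale the iteration count so every size does a comparable amount of total work.
            int iterations = Math.Max(1, 100_000_000 / size);

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                encoding.GetBytes(value, 0, value.Length, buffer, 0);
            sw.Stop();

            // For short strings, much of the time per call is dispatch and argument-validation
            // overhead rather than actual encoding work.
            Console.WriteLine($"size {size,8}: {sw.Elapsed.TotalMilliseconds * 1_000_000 / iterations:F1} ns/call");
        }
    }
}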

tarekgh commented 7 years ago

Thanks, I'll do that.

redknightlois commented 7 years ago

OK, I figured out why I am seeing this now when it wasn't there before: the change actually made it into the 1.1 release, and we recently upgraded to it. Any chance of a servicing release fixing the regression? cc @jkotas

tarekgh commented 7 years ago

I have opened the PR for adding the short strings to our perf tests: https://github.com/dotnet/corefx/pull/17974

@gkhanna79 @danmosemsft could you help with @redknightlois's question:

Any chance for a servicing release fixing the regression?

@redknightlois, is there a way that, once I merge the fix, you can build and try it locally to make sure the regression is gone and that there are no other causes of such regressions? Thanks for your help here.

redknightlois commented 7 years ago

@tarekgh Can you do a release build of a 1.1 and send it to me? You can ask @AndyAyersMS for my contact information.

danmoseley commented 7 years ago

@leecow do we have a written bar for backports to 1.1?

tarekgh commented 7 years ago

@redknightlois I'll try to build a private version for you tomorrow.

tarekgh commented 7 years ago

@redknightlois I have built private runtime binaries and shared them here: https://1drv.ms/u/s!AhP2SwMuINnCjLN-REk08dErlSfKIA

These are built based on v1.1 as you requested. Please let us know what you get when you measure your scenario's perf. Thanks.

redknightlois commented 7 years ago

Regression is fixed in v1.1

[screenshot: utf8]

tarekgh commented 7 years ago

@leecow could you please advise whether we can get this fix into 1.1?

leecow commented 7 years ago

The check-in deadline for the next 1.1 update is Monday. Please submit the fix for review.

tarekgh commented 7 years ago

@leecow I have submitted the PR https://github.com/dotnet/coreclr/pull/10805 and sent email to the ship room, so I'll wait to merge this change until we have the ship room's approval.