microsoft / typed-rest-client

Node Rest and Http Clients with typings for use with TypeScript

Performance issues with readBody in HttpClient.ts #321

Closed Axosoft-Ashley closed 2 years ago

Axosoft-Ashley commented 2 years ago

Environment

Node version: 14.18.3 Npm version: 6.14.5 OS and version: Mac 10.15.7 typed-rest-client version: ^1.8.4

Issue Description

Buffer.concat is called repeatedly inside readBody in HttpClient.ts, which causes the Electron UI to freeze. NOTE: we have an incoming PR on the way. In it, we collect the chunks into an array and build the Buffer with a single concat in the 'end' phase of readBody(); this has been tested and shown to significantly improve performance!
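A minimal sketch of the fix described above, assuming a Node.js Readable stream as input (the function name `readBody` matches the method under discussion, but this is an illustrative standalone version, not the library's actual code): chunks are pushed into an array on each 'data' event, and Buffer.concat runs exactly once at 'end'.

```typescript
import { Readable } from "node:stream";

// Collect chunks in an array and concatenate once, instead of calling
// Buffer.concat on every 'data' event (which re-copies the whole
// accumulated body each time).
function readBody(stream: Readable): Promise<string> {
  return new Promise((resolve, reject) => {
    const chunks: Buffer[] = [];
    stream.on("data", (chunk: Buffer) => {
      chunks.push(chunk); // no copying here, just a reference
    });
    stream.on("end", () => {
      resolve(Buffer.concat(chunks).toString("utf8")); // single concat
    });
    stream.on("error", reject);
  });
}
```

With this shape, the total bytes copied is linear in the body size rather than quadratic in the number of chunks.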

Expected behaviour

Electron client should still be quick.

Actual behaviour

Electron client freezes and high CPU usage while calling an api on loop

Steps to reproduce

  1. Have Electron (version shouldn't matter).
  2. Use the npm package azure-devops-node-api to call GitApi getRepositories (it uses typed-rest-client to make its REST calls).
  3. We found the issue by having 1k+ projects in an Azure account and then looping through getRepositories to fetch them all:
        return Promise.all(
          fp.map((project) => {
            return _gitApi.getRepositories(project.id, includeLinks, includeAllUrls);
          }, projects)
        );
  4. We found it to be significantly inefficient and dug deeper to find that Buffer.concat was killing performance.
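Why repeated Buffer.concat is so expensive can be shown with a hypothetical counting sketch (not from the issue, just arithmetic): concatenating after every chunk copies the entire accumulated body each time, so N chunks cost O(N^2) bytes copied, versus O(N) for a single final concat.

```typescript
// Bytes copied when Buffer.concat runs after every incoming chunk:
// each call re-copies everything accumulated so far.
function bytesCopiedRepeatedConcat(chunkSize: number, chunks: number): number {
  let total = 0;
  let accumulated = 0;
  for (let i = 0; i < chunks; i++) {
    accumulated += chunkSize;
    total += accumulated; // the i-th concat copies the whole body so far
  }
  return total;
}

// Bytes copied when chunks are collected and concatenated once at 'end'.
function bytesCopiedSingleConcat(chunkSize: number, chunks: number): number {
  return chunkSize * chunks; // one final copy of the whole body
}
```

For 1,000 responses of 1,000 chunks each, the repeated-concat strategy copies roughly 500x more bytes, which matches the profiler showing Buffer.concat dominating.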

Logs

profiler logs:

[profiler screenshot]

profiler logs zoomed in:

[profiler screenshot, zoomed in]
AndreyIvanov42 commented 2 years ago

Hi @Axosoft-Ashley, thanks for reporting! We'll look into what we can do.

Mr-Wallet commented 2 years ago

Hi, I'm a colleague of Axosoft-Ashley. What did you think of the solution she described about gathering up chunks and doing a single Buffer.concat? We would be happy to open a pull request ourselves with our solution.

The only downside I can see is that it doesn't allow the garbage collector to free up the chunks while the stream is still going, but in practice that doesn't seem like a very large amount of additional RAM for very long (at least for our use case, i.e. the Azure DevOps REST API). We might be able to do even better with a buffer that doubles in size as it runs out, but that's a lot more complicated (either in implementation or by bringing in a package like concat-stream). We didn't want to over-engineer for our particular issue. Given the lack of other issues in this repo's history, it seems like this solution is probably sufficient. Please let us know your thoughts.
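For reference, the doubling-buffer alternative mentioned above could look roughly like this. This is a hypothetical sketch (the class name and initial capacity are invented here, not proposed in the thread): capacity doubles whenever a chunk won't fit, giving amortized O(N) copying while letting each chunk be garbage-collected as soon as it is copied in.

```typescript
// Growable buffer: copies each chunk in on arrival, doubling capacity
// as needed, so incoming chunks never need to be retained.
class GrowableBuffer {
  private buf = Buffer.alloc(1024); // assumed initial capacity
  private length = 0;

  append(chunk: Buffer): void {
    // Double capacity until the new chunk fits.
    while (this.length + chunk.length > this.buf.length) {
      const bigger = Buffer.alloc(this.buf.length * 2);
      this.buf.copy(bigger, 0, 0, this.length);
      this.buf = bigger;
    }
    chunk.copy(this.buf, this.length);
    this.length += chunk.length;
  }

  toString(): string {
    return this.buf.toString("utf8", 0, this.length);
  }
}
```

The trade-off is exactly as described in the comment: lower peak retention of chunk objects, at the cost of more implementation complexity than the collect-then-concat approach.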

AndreyIvanov42 commented 2 years ago

Thanks all for the help!