nodejs/diagnostics · Node.js Diagnostics Working Group

Node CPU Profiling Roadmap #148

Closed · yunong closed this 3 years ago

yunong commented 6 years ago

We’d like to get feedback on the status of the profiling tools landscape in Node.js today. In particular, we want to get alignment on a roadmap that will provide a free, open source, cross-platform set of tools, part of the Node/V8 API (i.e. maintained across LTS versions), that can provide a powerful suite to debug and diagnose Node.js issues in production.

Production Challenges

There are challenges unique to debugging and diagnosing issues in production, especially in large, critical deployments. In particular, the production nature of these deployments imposes the following constraints:

  1. Generally, most solutions need to be low impact on the process. Attaching debuggers or tracers is often impractical when the application is taking production traffic. Debuggers will pause the process, causing all inflight requests to hang. Tracers and debuggers often introduce large performance penalties, which can impact customers.
  2. Tools need to be safe and not cause kernel panics or process crashes.
  3. It’s often hard to reproduce issues seen at production scale -- therefore programmatic access to tooling via APIs can help diagnose problems as they occur, e.g. starting a profile when some condition is met, since the triggering scenario often can’t be reproduced on demand.

Profiling

One of the most useful methodologies for optimizing CPU performance in a running application is to sample the CPU stack frames (CPU profiling) and then visualize the samples, typically as a flamegraph. This technique shows the hot code paths on CPU, which gives you the opportunity to optimize the relevant source code.

This approach can be done in production with OS-level profilers (perf, DTrace, SystemTap, eBPF) at very low overhead. On their own, however, these profilers lack the information needed to resolve JS frames, leaving unhelpful raw memory addresses in their place. V8 solves this problem by dumping a mapping from native frame addresses to JS source files and line numbers.
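
As an illustration: on Linux this mapping is what `--perf-basic-prof` writes to `/tmp/perf-<pid>.map`, which perf picks up automatically when resolving symbols. The lines look roughly like this (the addresses, sizes, and paths here are made up):

```
3ef4a8e04f60 2c8 LazyCompile:~handleRequest /srv/app/server.js:42:24
3ef4a8e12a00 318 LazyCompile:*render /srv/app/views.js:17:18
```

Each line maps a JIT code start address and size (both hex) to a human-readable name, so perf can label JS frames it would otherwise show as bare addresses.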

It’s important to mention here that having access to all stack frames, whether native (V8, libc, syscalls, libuv, native modules) or JS, matters. Problems can occur anywhere in the stack, and we want to be able to profile Node with complete stack frames. For example, we heavily use gRPC, which is a native module, so without access to native frames we would have no visibility into this critical part of our application.

There are a few issues with this implementation:

  1. perf(1) support is now deprecated in V8 and will not be supported starting in Node 8 -- which effectively means we’re losing the ability to profile JS stacks.
  2. Lack of cross-platform tooling. Even if perf support wasn’t deprecated, this solution only works on Linux.

We’d like to contribute and collaborate on a set of comprehensive, cross-platform, and open source CPU profiling tools with the Node and V8 team. The V8 team has advised us that they plan to support the v8 profiler and the v8 cpu profiler API going forward, and we want to unlock CPU profiling capabilities in Node using these supported frameworks.

Roadmap:

  1. Determine which tools -- v8 profiler (--prof), cpu profiler or something else -- to support and maintain for profiling Node applications.
  2. Currently, these tools do not support native C/C++ frames, which includes V8, native-module, libuv, and syscall stack frames. Note that because these tools run in-process, they will never be able to show syscalls, as those can only be sampled from within the kernel. Supporting native frames will require implementing a stackwalker in the v8 profiler or the cpu profiler.
  3. Determine the overhead of profiling using these tools. With the now deprecated OS level profilers, the overhead was quite low -- and thus suitable to use in a production environment without outsized impact to customers.
  4. Dynamically toggle profiling in a production environment. It’s not practical to profile all the time in production due to the performance impact -- being able to dynamically enable and disable profiling via an API would be ideal.
  5. Provide an API to programmatically consume profiling data once enabled (a sketch of what items 4 and 5 could look like with today’s inspector module follows this list).
  6. Add documentation so these tools can be adopted by users.
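
For concreteness, here is a minimal sketch of items 4 and 5 using the inspector module that ships with Node 8+; the trigger condition, duration, and output path are placeholders for illustration, not part of any proposal here:

```js
const inspector = require('inspector');
const fs = require('fs');

const session = new inspector.Session();
session.connect();

// Dynamically profile for `ms` milliseconds, then write out the profile.
// In production this call would be gated on some condition, e.g. a latency
// threshold (the trigger here is a placeholder).
function profileFor(ms) {
  session.post('Profiler.enable', () => {
    session.post('Profiler.start', () => {
      setTimeout(() => {
        session.post('Profiler.stop', (err, res) => {
          if (!err) fs.writeFileSync('app.cpuprofile', JSON.stringify(res.profile));
          session.post('Profiler.disable');
        });
      }, ms);
    });
  });
}

profileFor(30 * 1000);
```

The resulting .cpuprofile can be loaded into Chrome DevTools or turned into a flamegraph, but it only contains JS frames, which is exactly the limitation item 2 above is about.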

We’re looking for feedback and alignment with the community on this subject before proceeding with the design and implementation -- please let us know your thoughts.

mmarchini commented 6 years ago

Great explanation and summary of the current scenario.

Suggestion: We could add "Write tests for those profiling tools" to the roadmap, as I think one of the goals is to have these tools as first-class citizens in the future.

joyeecheung commented 6 years ago

Another suggestion: a better, standardized format for the profiles, and tools that convert legacy formats to it. The current V8 CPU profile format is quite limiting: since it's a single JSON document, it cannot be concatenated easily and has to be held in memory as a whole before being serialized.
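
For context, a .cpuprofile document has roughly the following shape (field names from the DevTools Profiler domain; the values here are made up), which is why it has to be materialized in one piece rather than streamed:

```js
// Illustrative shape of a V8 .cpuprofile document.
const profile = {
  nodes: [ // the call tree, flattened into an array of nodes
    { id: 1, callFrame: { functionName: '(root)', scriptId: '0', url: '',
                          lineNumber: -1, columnNumber: -1 }, children: [2] },
    { id: 2, callFrame: { functionName: 'handleRequest', scriptId: '42',
                          url: 'server.js', lineNumber: 9, columnNumber: 14 } },
  ],
  startTime: 0,                    // microseconds
  endTime: 30e6,
  samples: [2, 2, 1],              // one node id per sample
  timeDeltas: [1000, 1000, 1000],  // microseconds between samples
};
```

Because samples and timeDeltas live in the same JSON document as the node table, a profile can't be emitted incrementally or split into chunks without a new container format.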

davidmarkclements commented 6 years ago

In 0x (flamegraph tool) I've re-implemented stack tracing on top of the v8 profiler (--prof) and the v8 tick processor (--prof-process --preprocess); see https://github.com/davidmarkclements/0x/pull/99. It's currently behind a flag, but in the next major version this will be the default, with a --kernel-tracing flag for OS-level tracing.

Pros

Cons

What I would love is the ability to cross-reference --prof stacks with OS-level tracing stacks. If we can do that, then the perf deprecation and the bytecode handler problem (and any other problems) would be resolved with --prof, while lower-level tracing could be used to complement it.

davidmarkclements commented 6 years ago

In terms of Node internal implementation, absolutely agree on tests.

The way we currently expose --prof-process isn't ideal: the stack processor is concatenated JavaScript built from JS files in deps/v8/tools, with additional injected code to make log output work with Node (see https://github.com/nodejs/node/pull/14966).

danielkhan commented 6 years ago

We recently added production-grade continuous sampling to our product. For that we had to find a sweet spot between sampling frequency and duration. Additionally, we discovered a memory leak in v8::CpuProfiler (https://github.com/nodejs/node/issues/14894). Here is a blog post with screenshots: https://www.dynatrace.com/news/blog/introducing-code-level-visibility-for-node-js/

As already mentioned, this approach won't give you native frames; still, we considered it sufficient for many use cases.

@Hollerberg, who implemented it, will be at the summit, and we are happy to discuss the validity of our approach and to be part of an initiative to improve CPU sampling capabilities in Node.

aalexand commented 6 years ago

We use the CPU profiler in our profiling agent, which is an experimental way to continuously collect data from production Node.js apps. Some notes on that experience so far:

mhdawson commented 6 years ago

I believe it is important that Node.js have first-class tooling for problem investigation and CPU profiling is part of that along with tracing, debugging, heap dumps, node-report and being able to introspect core files. Ideally, a base level of core tooling would be available with the runtime without having to install additional components as that can often be an issue in production.

That's a long way of saying that I support your effort and will look to see how I can help with respect to getting CI testing and other supporting pieces in place, and also how I might be able to pull in people from IBM to help collaborate.

mmarchini commented 6 years ago

For reference: https://github.com/nodejs/diagnostics/issues/150

hashseed commented 6 years ago

Another issue that @bmeurer mentioned to me is that there is no way for Linux perf or any other perf tool outside of V8 to discern an inlined function from its caller, since both occupy a single stack frame. The CpuProfiler doesn't provide this either, but @franzih is addressing this (https://bugs.chromium.org/p/v8/issues/detail?id=7203).

mmarchini commented 6 years ago

Updates from the Diagnostics Summit

External profilers

External profilers are working well on Node 6, even though they are not officially supported. They also work on Node 8+, but the information they collect can be misleading due to the introduction of TurboFan and Ignition. Also, the current method used by most external profilers to resolve JITed functions (--perf-basic-prof) is not officially supported by V8 and might break or be removed in the future.

To support external profilers in the future, we need two things, detailed in the sections below: meaningful interpreted frames on the stack, and an API exposing the information needed to resolve JITed function addresses.

Interpreted Frames

After the introduction of TurboFan and Ignition, interpreted frames on the stack don't reflect JavaScript function calls: only the interpreter appears in the stack, as we can see in the image below.

image

As a consequence, understanding the data collected by external profilers can be tricky, and the data can be misleading since there's no way to distinguish between different JavaScript function calls when they are running in interpreted mode. As soon as those functions are compiled and optimized by Turbofan, they will appear in the stack as before.

During the Summit, we came up with three different approaches to get more meaningful information on the call stack for external profilers. All of them must be implemented in the V8 interpreter, and they all change the current flow to add a unique stack frame for each JS function.

Intermediate function before calling the interpreter

Add an intermediate frame to the stack which points to a JITed function whose purpose is to keep track of which JS function is being called.

image

Duplicate the interpreter code for each JS function

Copy the InterpreterEntryTrampoline code for each interpreted function; this way each interpreted frame will have a unique entry point. Apparently, ChakraCore is implemented this way.

image

Change the stack at runtime to replace the interpreter with a unique address representing the JS function

Hack the call stack at runtime, replacing InterpreterEntryTrampoline's return address with the address of a small JIT function which will return to InterpreterEntryTrampoline later.

image

API with information to resolve JIT function addresses

External profilers can’t resolve names for JITed functions (including V8 builtins) without help from the runtime, and most of the time those names are resolved with post-processing tools. Today we have --perf-basic-prof, which is used by a variety of tools to post-process the output from profilers and extract useful information from it.

As --perf-basic-prof is very limited and not officially supported by V8, the suggestion is to create an API which listens to code creation events and exposes all the information needed by external tools to resolve names for JITed functions.
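
A hypothetical shape for such an API is sketched below; every invented name is flagged in the comments, and nothing like this exists in Node today:

```js
// Hypothetical code-creation-event API; the hook below is invented for
// illustration and does not exist in Node or V8 today.
const fs = require('fs');

// An external tool could rebuild a perf-map-style file from such events.
const mapFile = fs.createWriteStream(`/tmp/perf-${process.pid}.map`);

// Stand-in for the imagined runtime hook (a real one would live in V8).
const hypotheticalCodeEvents = {
  onCodeCreation(listener) { /* stub: V8 would invoke this per code object */ },
};

// Imagined event: fires once per JITed code object (functions, builtins, ...).
hypotheticalCodeEvents.onCodeCreation((event) => {
  // event.address: start address of the JITed code
  // event.size:    code size in bytes
  // event.name:    resolved name, e.g. "LazyCompile:~handleRequest server.js:42"
  mapFile.write(
    `${event.address.toString(16)} ${event.size.toString(16)} ${event.name}\n`
  );
});
```

The point is that the runtime, not the external profiler, owns the address-to-name information, so the API only needs to stream code creation (and ideally code move/removal) events out.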

V8 Profiler & CpuProfiler

V8's builtin profilers are officially supported, but they can only sample JavaScript frames. We discussed the possibility of adding native frames to those profilers as well in the future. Some challenges are 1) sampling frames from other threads; 2) sampling syscall frames.

Roadmap

mmarchini commented 6 years ago

Related issue: https://github.com/nodejs/diagnostics/issues/150

mhdawson commented 6 years ago

Thanks for the great write up of the discussion from the summit.

ofrobots commented 6 years ago

OP said:

perf(1) support is now deprecated in V8 and will not be supported starting in Node 8 -- which effectively means we’re losing the ability to profile JS stacks.

AFAIK, perf(1) support is not deprecated. Like any of the other profiling APIs, it doesn't come with an official support expectation from the V8 team. I do not think this perf support is going anywhere anytime soon.

yunong commented 6 years ago

@ofrobots the decision to deprecate perf(1) support came from discussions with @bmeurer, @hashseed, and @fhinkel. If this has changed then great -- but I'd like to ensure we're all aligned.

hashseed commented 6 years ago

Let me try to rephrase this: perf-based tools are not officially supported because they were never intended for public use. As a result, they did not get any attention when we launched TurboFan and Ignition in V8 5.8.

That being said, they worked fine for a long time before 5.8, and if fixed, I expect they will stay functional for a long time. I don't expect major changes like TurboFan/Ignition to occur often.

mmarchini commented 6 years ago

In addition to what @hashseed said:

I expect that they will stay functional for a long time

The time span here is in years, since a new compiler/interpreter won't replace TurboFan/Ignition overnight. Also, it won't happen by surprise: we knew TurboFan and Ignition were coming years before they landed in Node.js core.

I don't expect major changes like TurboFan/Ignition to occur often.

I believe when this happens it will follow the same upgrade path as in the past: the new compiler/interpreter is introduced in the codebase, and the old one is replaced in steps until it can be removed. That gives us time to help make the new compiler/interpreter work well with external profilers (especially if we want to provide Tier 1 support for external profilers in Node.js, which I think we should).

mhdawson commented 6 years ago

@hashseed so is the recommended path forward from the V8 team to use the perf-based tools as opposed to creating a new API? If so, is there some level of commitment to keeping them working (along the lines of what we have now with V8 running Node.js tests)?

hashseed commented 6 years ago

I recommend using the CpuProfiler, since this is the one that both Chrome DevTools and GCP Stackdriver use. But I can see that the current feature set makes it unattractive in some use cases.

Perf-based tools are not designed with public use in mind, and I cannot make a hard commitment to support them in the long run. What I want to avoid is being in the situation of having to block changes to V8 because they may break perf-based profiling in some way. I don't expect this to happen soon or often, though. Also, the capability whose breakage we are talking about, introduced by Ignition, is something I hear no other popular interpreted-language runtime supports either.

That being said, we currently don't have any test cases for perf-based profiling, again due to the nature of them being ad-hoc tools not designed for public use. I would welcome test cases in Node.js so that we could at least notice when they break and make informed decisions about whether fixing them is a reasonable effort. The current situation is that we could not honestly claim to offer official support even if we wanted to.

hekike commented 6 years ago

CPU Profiling deep dive WG meeting

Time

UTC Thu 15-Mar-2018 19:00 (07:00 PM):

| Timezone | Date/Time |
| --- | --- |
| US / Pacific | Thu 15-Mar-2018 12:00 (12:00 PM) |
| US / Mountain | Thu 15-Mar-2018 13:00 (01:00 PM) |
| US / Central | Thu 15-Mar-2018 14:00 (02:00 PM) |
| US / Eastern | Thu 15-Mar-2018 15:00 (03:00 PM) |
| London | Thu 15-Mar-2018 19:00 (07:00 PM) |
| Amsterdam | Thu 15-Mar-2018 20:00 (08:00 PM) |
| Moscow | Thu 15-Mar-2018 22:00 (10:00 PM) |
| Chennai | Fri 16-Mar-2018 00:30 (12:30 AM) |
| Hangzhou | Fri 16-Mar-2018 03:00 (03:00 AM) |
| Tokyo | Fri 16-Mar-2018 04:00 (04:00 AM) |
| Sydney | Fri 16-Mar-2018 06:00 (06:00 AM) |

Agenda

Flarna commented 6 years ago

@hekike I expect this deep dive will focus only on external profiling (e.g. via perf), not in-process profiling (e.g. using V8 CpuProfiler APIs).

mhdawson commented 6 years ago

I'm travelling so I won't be able to make that time. Any chance we could do it Thurs/Friday instead?

mike-kaufman commented 6 years ago

@hekike - a little confusion in your timetable - the date says Mar 07, not Mar 14. Also, please note that this Sunday (3/11) is when we move clocks forward for DST in the US & Canada, so we'll be at UTC-07:00. This will affect some times above.

hekike commented 6 years ago

@Flarna CpuProfiler APIs are an optional future topic; I'm not sure we will have time for it, so I think your expectation is correct.

@mhdawson I would be okay with either Thursday or Friday; would that work for you, @mmarchini, and for at least one person from the V8 team? (cc @hashseed @fhinkel)

@mike-kaufman sorry, my bad, I used the table from the last WG meeting. How can I generate such a nice markdown timetable?

hashseed commented 6 years ago

I think I can join. With the DST change, what time are we talking about here?

mmarchini commented 6 years ago

Both Thursday and Friday work for me. I can give an overview of the current situation at the start of the meeting (challenges, proposed implementation & demo).

fhinkel commented 6 years ago

I think 7 pm UTC is 8 pm Berlin, 3 pm East Coast. Works for me this Wednesday and Thursday.

mike-kaufman commented 6 years ago

how can I generate such a nice markdown timetable?

@hekike - see here

mhdawson commented 6 years ago

Friday @ 7pm UTC should work for me as well.

hashseed commented 6 years ago

Call me old-fashioned, but I strongly prefer Thursday evening over Friday evening :)

hekike commented 6 years ago

Okay, as far as I can see Thursday works for everyone and is the preferred day.

UTC Thu 15-Mar-2018 19:00 (07:00 PM):

| Timezone | Date/Time |
| --- | --- |
| US / Pacific | Thu 15-Mar-2018 12:00 (12:00 PM) |
| US / Mountain | Thu 15-Mar-2018 13:00 (01:00 PM) |
| US / Central | Thu 15-Mar-2018 14:00 (02:00 PM) |
| US / Eastern | Thu 15-Mar-2018 15:00 (03:00 PM) |
| London | Thu 15-Mar-2018 19:00 (07:00 PM) |
| Amsterdam | Thu 15-Mar-2018 20:00 (08:00 PM) |
| Moscow | Thu 15-Mar-2018 22:00 (10:00 PM) |
| Chennai | Fri 16-Mar-2018 00:30 (12:30 AM) |
| Hangzhou | Fri 16-Mar-2018 03:00 (03:00 AM) |
| Tokyo | Fri 16-Mar-2018 04:00 (04:00 AM) |
| Sydney | Fri 16-Mar-2018 06:00 (06:00 AM) |

mmarchini commented 6 years ago

Upstream CL to make interpreted frames distinguishable by external profilers: v8:959081

mike-kaufman commented 6 years ago

@hekike - is there a link for the deep-dive hangout?

edit - nm, I see this is now on 3/15.

hekike commented 6 years ago

Not yet. I'll share it here tomorrow before the meeting.

hekike commented 6 years ago

Hangouts link: https://meet.google.com/gie-wetm-ufy Docs: https://docs.google.com/document/d/1e5rKT48tpEKttEZuUglbXNVPnwbwtZbWrNEKgfGLjmo/edit?usp=sharing

See you soon!

mmarchini commented 6 years ago

Reminder: The meeting will start in a few minutes.

mmarchini commented 6 years ago

I played with the idea of introducing native frames to --prof and v8::CpuProfiler, but had a few setbacks (especially with v8::CpuProfiler). A draft of the implementation can be found here: https://github.com/nodejs/node-v8/commit/f6d954979fca10ae7feb329f8701de8087408fd3

It works quite well with --prof, because its output format has all the sampled stack addresses for each tick, along with the information necessary to resolve V8-generated symbols and information about linked shared libraries to resolve native symbols. Since the output of --prof is processed after the sampling is over, resolving native symbols is quite easy.

On the other hand, making it work with v8::CpuProfiler is not trivial, because v8::CpuProfiler resolves symbols at execution time rather than lazily like --prof or Linux perf. It also drops any address it can't resolve, and since V8 has no knowledge of native symbol names, it will drop all native frames, even though it uses the same backend (v8::SafeStackIterator) used by --prof. To solve this, we need to either allow lazily resolving symbols when necessary or teach V8 to resolve C++ symbols.

The first approach is probably more performant, but requires us to change the output format of v8::CpuProfiler to also give information about unresolved frames (and maybe about linked shared libraries), and we need to know if there's interest on the V8 side in introducing something like that. We should probably align with the Chrome DevTools team as well, since they use the .cpuprofile format.

The second approach would require us to either implement an ELF reader (and the equivalent on Windows) inside V8 or call an external program (like nm) to resolve native symbols. We could use libelf for that, but I'm worried about portability issues here.

Honestly, I don't know which approach we should take, or even if it's a good idea to introduce native frames to v8::CpuProfiler. Ideas are appreciated. @hashseed @fhinkel @ofrobots WDYT?

mcollina commented 6 years ago

@mmarchini if we could turn on and off --prof at runtime, I think it would be very nice.

mmarchini commented 6 years ago

@mcollina I agree, but I think --prof is not officially supported.

misterdjules commented 6 years ago

@hashseed

Another issue that @bmeurer mentioned to me is that there is no way for Linux perf or any other perf tool outside of V8 to discern an inlined function from its caller, since both occupy a single stack frame. The CpuProfiler doesn't provide this either, but @franzih is addressing this.

The link you provided (https://bugs.chromium.org/p/v8/issues/detail?id=7203) is not public. Is there a way to make this public?

Thanks!

mmarchini commented 6 years ago

@misterdjules v8/v8:7203 is public now.

misterdjules commented 6 years ago

@mmarchini Thank you! FWIW, and for others hitting the same issue: the link above is broken; the correct link is https://bugs.chromium.org/p/v8/issues/detail?id=7203.

mike-kaufman commented 6 years ago

@mmarchini - can this be closed now?

mmarchini commented 6 years ago

@mike-kaufman let me double check first. Might be worth keeping it open (or opening another issue) to discuss improvements to V8 CpuProfiler, but I don't think we need to keep it in the agenda anymore.

gireeshpunathil commented 4 years ago

should this remain open? [ I am trying to chase dormant issues to closure ]

rochdev commented 4 years ago

Was this discussion ever resolved? It's unclear from reading this thread whether there is a current recommendation and/or roadmap for production profiling.

In my own tests, I found the following:

  * Since the only way to get profiles at runtime is to start/stop the profiler, each capture incurs the StartProfiling overhead again.
  * --prof only outputs when the process is stopped, making it unsuitable for production.
  * perf seems to be the only viable solution right now, but it's not officially supported and only available on linux.

Is there still a plan to have a production-ready profiler available for Node?

mmarchini commented 4 years ago

Since the only way to get profiles at runtime is to start/stop

That's incorrect: both the C++ CpuProfiler API and the CpuProfiler inspector domain allow you to take profiles on demand (with no need to stop the process). This is not well documented though, so I understand the confusion, and that's something we need to improve. The overhead is supposedly higher than perf, but still acceptable. I haven't seen a noticeable overhead when using it (at least not on current Node.js versions). Some folks here have expressed concerns about the performance of V8 CpuProfiler before, but if that's still an issue we need reproducible code to share those concerns with V8, otherwise the situation will not improve.

--prof only outputs when the process is stopped, making it unsuitable for production.

Agreed, --prof is unsuitable for production since you can't start it on demand, but it's worth noting that V8 does not support this flag officially.

perf seems to be the only viable solution right now, but it's not officially supported and only available on linux.

You're right, perf is the only system profiler we have usage examples for today, and yes, it's not officially supported by V8, but we take a best-effort approach to keep it working (it's been working since the v10 LTS, and since then we were able to fix most issues in V8 before they reached Node.js). I know some folks at Microsoft have successfully used the --interpreted-frames-native-stack flag with a Windows profiler to profile V8, so it should also be possible on Windows and other operating systems, but someone interested in those platforms needs to do the work to ensure it works and is tested (just as happened with Linux).

Is there still a plan to have a production-ready profiler available for Node?

The biggest problem is that any solution for a "production-ready profiler" needs to be implemented and supported by V8. The current recommendation from V8 is to use the inspector protocol CpuProfiler. For folks who need native frames as well, I generally recommend trying perf, with the caveats you mentioned.

With that being said, it's worth noting that languages like Python don't have an officially supported production-ready profiler which also shows native frames. Java doesn't have one either, but perf is available there with a similar support level and similar caveats to what we have on Node.js/V8. Native-ish languages like Go and Rust might have better support (since they are closer to native code than JIT and interpreted languages), but I don't have enough knowledge of those environments.

rochdev commented 4 years ago

That's incorrect, both the C++ CpuProfiler API and the CpuProfiler inspector domain allow you to take profiles on demand (with no need to stop the process)

I meant you have to stop the profiler to get the profile, not the process, sorry for the confusion. The main overhead in my tests came from v8::CpuProfiler::StartProfiling blocking the event loop. While over, say, 30 seconds this may disappear completely in the average, it can still cause a massive spike for any requests in flight while StartProfiling is running.

Since the only way to capture a profile is to call StopProfiling, the profiler needs to be started again after each capture, causing another spike.

but if that's still an issue we need reproducible code to share those concerns with V8

Simply measuring the call to StartProfiling gives me a good 15-20ms. The max latency when using autocannon against a Hello World server goes from 10-30ms to 200-400ms.
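
For anyone who wants to reproduce this, here is a minimal sketch of that kind of measurement using the inspector module; the numbers above came from rochdev's setup, not from this exact snippet, and running a load generator such as autocannon against the app at the same time is assumed:

```js
const inspector = require('inspector');

const session = new inspector.Session();
session.connect();

session.post('Profiler.enable', () => {
  const start = process.hrtime.bigint();
  session.post('Profiler.start', () => {
    // Profiler.start runs on the main thread, so this roughly measures
    // how long the event loop was blocked while the profiler spun up.
    const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`Profiler.start took ${elapsedMs.toFixed(1)}ms`);
  });
});
```

On a large app with many compiled functions this is where the reported 15-20ms pause would show up; on a trivial script it will be much smaller.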

The biggest problem is that any solution for a "production-ready profiler" needs to be implemented and supported by V8.

Yes, and this is my main concern, since the profiler provided by V8 has a performance impact and doesn't support native frames.

Anyway, thanks a lot for the explanation! (and your work on node-linux-perf :)

mmarchini commented 4 years ago

I meant you have to stop the profiler to get the profile, not the process, sorry for the confusion.

I see. Do you see another use case for getting the result without stopping besides avoiding the start overhead? I'm wondering if it makes sense to propose a new API if we can fix the startup overhead.

The main overhead in my tests came from v8::CpuProfiler::StartProfiling blocking the event loop.

Yes, I heard that before but I haven't seen it in practice yet (but I didn't measure it closely either).

Simply measuring the call to StartProfiling gives me a good 15-20ms. The max latency when using autocannon with a Hello World goes from 10-30ms to 200-400ms.

Will do, thanks, this is super helpful :). How are you calling StartProfiling, via C++ or inspector protocol (or which module are you using)?

vmarchaud commented 4 years ago

Maybe I'm wrong, but wouldn't using the inspector protocol mitigate the startup overhead, since it's a separate thread (and not impacting the event loop)?

Flarna commented 4 years ago

Maybe of interest: https://github.com/nodejs/node/issues/23070#issuecomment-429425092. I have not retried with an up-to-date Node version, but I assume not that much has changed since then.

We have seen high RSS overhead during StartProfiling, and as far as I remember it also blocked the event loop for quite a long time. The reason was that, for each existing function, some info must be copied into the profiler thread during StartProfiling. Profiling itself is efficient, as it runs in a different thread. The main issue for us was that we could neither predict nor limit how big the overhead would be, and the result was that the OOM killer on some cloud platforms terminated the apps.

For a user who fully knows their app and controls when the profiler runs, this is maybe not an issue, because the user can adapt their setup. But for a monitoring tool it's quite a pain that apps stop working just because the tool is added.