ghost closed this issue 6 years ago
@nodejs/v8
I'd check whether this is fixed with newer V8 and bisect for the fix.
I could reproduce. Then I wanted to run it again with node master, but there is a dependency that uses a V8 API that doesn't exist anymore.
I commented on the Rocket Chat issue. From the debug build output it looks like API abuse to me.
I was wrong. This rather looks like the parser is running on a thread it is not supposed to be on.
If you have a patch at some point, I'm happy to test.
I've seen the thread ID error when running a debug build of Node and using https://github.com/laverdet/node-fibers (which I believe RocketChat uses, since it's a Meteor app?):
# Fatal error in ../deps/v8/src/parsing/parser.cc, line 542
# Debug check failed: ThreadId::Current().Equals( outer_scope_info->GetIsolate()->thread_id()).
Specifically, the fibers library implements coroutines, which operate in a way that looks like cooperative multithreading from the perspective of V8 (even though fibers does not actually spawn threads), so any parts of Node that are not prepared to deal with multiple (cooperating, non-overlapping) threads can get confused. V8 itself is designed to tolerate multiple threads as long as you're careful to lock and unlock the Isolate object, so the fibers library isn't doing anything forbidden or dangerous as far as V8 is concerned, though Node is a little less thread-aware than V8 in some areas.
While the ThreadId::Current().Equals(...) assertion failure is interesting, I don't think it's the root of the problem, since the segmentation fault still happens with non-debug builds of Node, where this check is omitted. I can provide a patched version of the fibers library that does not fail this assertion, if that's helpful. I have seen segmentation faults with Node 8.10.0 running locally with this patched version of fibers, though, so I think something else must be going wrong.
node-fibers mucks around with thread-local storage to trick V8 into believing it's running on multiple threads. Some recent change probably broke that hack but that's ultimately a node-fibers issue.
umm... I'm shocked.
Just to be clear: we have not pinned this down to fibers yet, though that seems possible, and of course those of us who rely on fibers are willing to put in work to keep them compatible with Node and V8. Let’s wait and see what the real problem is before assuming the solution will be unwelcome to anyone?
As soon as we know this is a fibers issue for sure, I definitely support moving the discussion over to that repo!
Definitely support finding out the root cause. I'm just expressing my surprise that node-fibers would manipulate TLS.
In principle, fibers could be implemented by actually spawning new threads, and then there would be nothing weird about how it works. I agree there’s some trickery involved in getting V8 to think there are multiple threads without incurring the full overhead of spawning them, and if those tricks are the source of this problem, then of course that’s a fibers problem. Those tricks have worked fairly well for several years, so I’m confident we can find a solution that involves fewer tricks (including just spawning real threads, if necessary).
Maybe someone can bisect the changes between 8.9 and 8.10 to see what caused the issue to surface?
@hashseed Yep! We (@abernix and I) will report back when we’ve bisected it down (we have a reliable repro).
@benjamn has anyone reported similar problems on 9.x? It seems odd that this is just showing up now
@mylesborins If this is a meteor-specific problem (fibers-related or otherwise), then it’s relevant to note that each version of meteor ships with a specific (stable) node version, and 8.9.4 is the most recent version we’ve shipped (in meteor 1.6.1). You could run an unreleased build of meteor with Node 9, but that’s uncommon.
@benjamn with that in mind we should try and figure out how to get some of the meteor stuff into CITGM so we can find these problems early 😄
@hashseed It took a little bit of dancing around and shuffling commits to avoid some of the surrounding un-buildable commits, but this bisects down to https://github.com/nodejs/node/commit/bede7a3cfa884c67cbb5d804a1a4f130bea4ac38 (deps: update V8 to 6.2.414.46).
Is there a documented process somewhere as to how this larger V8 update commit is formed? Is it essentially a diff between the current state of deps/v8 and that exact revision of V8? (Any advice on how to proceed within that commit would be greatly appreciated, but that seems like my next step.)
@abernix thanks for bisecting!
It seems that the commit updated V8 from 6.1.534.51 to 6.2.414.46. Bisecting across patch levels will probably cause issues, so I think it's best to look between 6.1.534 and 6.2.414.46.
In a V8 check out I get:
git log 6.1.534..6.2.414.46 --oneline | wc -l
1043
So bisecting about 10 steps in V8 should give you the culprit. To do so, check out V8 and update Node.js like this on every bisect step
v8/tools/release/update_node.py $V8_DIR $NODE_DIR
Then build and try to repro.
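To feed this into git bisect run, the steps above might be wrapped in a small per-step script. This is a sketch only: $V8_DIR, $NODE_DIR, and repro.js are placeholders for your local checkouts and reproduction, so the script is syntax-checked rather than executed here.

```shell
# Sketch of one bisect step per the workflow above: sync Node's deps/v8 to
# the V8 revision bisect has checked out, rebuild Node, and run the repro.
cat > /tmp/v8_bisect_step.sh <<'EOF'
#!/bin/sh
set -e
cd "$V8_DIR"
# Copy the currently checked-out V8 revision into Node's deps/v8.
tools/release/update_node.py "$V8_DIR" "$NODE_DIR"
cd "$NODE_DIR"
./configure
make -j"$(nproc)"
# The exit status tells `git bisect run` whether this revision is good or bad.
./node repro.js
EOF
sh -n /tmp/v8_bisect_step.sh && echo "step script syntax ok"
```

Then `git bisect run sh /tmp/v8_bisect_step.sh` in the V8 checkout drives the whole search.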
After @abernix bisected Node between v8.9.4 and v8.10.0 to confirm that the V8 6.2 update introduced the problem, and I bisected V8 between 6.1.534 (the version used in Node 8.9.4) and 6.2.414 (the version used in Node 8.10.0) to discover that the problem began happening between 6.2.145 and 6.2.146, we found a single change among the 42 commits in that range that pretty conclusively causes the segmentation faults.
I say "conclusively" because I can rebuild Node 8.10.0 with just this one commit reverted, and the problem disappears in every case where we were previously able to reproduce it.
Here's the final output of git bisect log after running git bisect start 6.2.146~1 6.2.145~1:
# bad: [28f25699ab2395324a425c3cb07ade53c79de322] [parser] Various cleanup for async function parsing
# good: [fa53a0dae76b186d99259bf775934a6623162796] [wasm] Fix API prototype chains
git bisect start '6.2.146~1' '6.2.145~1'
# good: [9735d7f1095efa3be8a289937d70d93bab7b24ad] [wasm] Fix link error messages to be more indicative of the actual error
git bisect good 9735d7f1095efa3be8a289937d70d93bab7b24ad
# good: [448a1d4bb513613d8c39d2e4eafbb2642602e651] [ic] Drop Array constructor support from CallIC.
git bisect good 448a1d4bb513613d8c39d2e4eafbb2642602e651
# bad: [ea0e1e21ecc13884302e0c77edad67659f2e68b4] Fixing failure on GC stress.
git bisect bad ea0e1e21ecc13884302e0c77edad67659f2e68b4
# good: [e91b96922efbcdc72db3684df8d065d560bfa900] [Compiler] Move construction of CompilationInfo into GenerateUnoptimizedCode
git bisect good e91b96922efbcdc72db3684df8d065d560bfa900
# good: [fd87a3c4236ed5bef4252818e40a38f020cdf671] [wasm] Remove redundant parameter
git bisect good fd87a3c4236ed5bef4252818e40a38f020cdf671
# first bad commit: [ea0e1e21ecc13884302e0c77edad67659f2e68b4] Fixing failure on GC stress.
In other words, the offending commit appears to be https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4, which (as @abernix points out) was originally intended to fix a problem introduced by https://github.com/v8/v8/commit/e15f554427beb65c921c80a12f03df18aa90e35a, though that commit was later reverted by https://github.com/v8/v8/commit/a193fde97ca9a6b0994cb79f556825b76bebd07b and has not successfully re-landed since then.
Since the problem that https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4 fixed is no longer present, arguably https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4 itself could be (should be?) reverted in V8 and then cherry-picked into Node, even if we don't know exactly why it caused this particular problem. See below for a plausible theory, but I'll be the first to admit it's probably not entirely correct. Nevertheless, I think we can reach the same conclusion without getting wrapped up in a debate about the theory.
We believe that https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4 should be reverted, and the revert commit should be cherry-picked into Node's copy of V8, and then hopefully released in Node v8.10.1.
Though the use of fibers in Meteor certainly does exacerbate this problem (see below for some ideas about why this makes sense), we do not know of any changes that could be made to https://github.com/laverdet/node-fibers or Meteor that would fix this problem.
If this recommendation seems reasonable, what's the best way to proceed? Specifically, should I submit a V8 PR/CL based on 6.2.414.42, or the current version (6.7.106, much more recent)?
Content warning: leaps of logic, speculation, hand-waving 👋
Taking a closer look at the commit in question, I think there are a few observations we can make:
commit ea0e1e21ecc13884302e0c77edad67659f2e68b4
Author: Juliana Franco <jupvfranco@google.com>
Date: Fri Aug 4 10:45:33 2017 +0200
Fixing failure on GC stress.
This bug was introduced by the CL
https://chromium-review.googlesource.com/c/586707
With these changes we make sure that the object being deoptimized
does not point to code objects that have been already collected.
The CL https://chromium-review.googlesource.com/c/596027 did not
fix this problem because we were only invalidating embedded objects
reachable from the stack, however it is possible that there are some
dangling references in objects not on the stack. Thus we consider
all the optimized code objects that are marked for deoptimization.
Bug: v8:751825
Change-Id: I3a6410c2bf556fa254c54a25e1f49d7356b9e51d
Reviewed-on: https://chromium-review.googlesource.com/601967
Commit-Queue: Juliana Patricia Vicente Franco <jupvfranco@google.com>
Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
Cr-Commit-Position: refs/heads/master@{#47163}
diff --git a/src/deoptimizer.cc b/src/deoptimizer.cc
index 2f5657e82e..3e90127db3 100644
--- a/src/deoptimizer.cc
+++ b/src/deoptimizer.cc
@@ -301,71 +301,70 @@ void Deoptimizer::DeoptimizeMarkedCodeForContext(Context* context) {
safe_to_deopt_topmost_optimized_code = safe_if_deopt_triggered;
}
}
}
#endif
// Move marked code from the optimized code list to the deoptimized
// code list.
// Walk over all optimized code objects in this native context.
Code* prev = NULL;
Object* element = context->OptimizedCodeListHead();
while (!element->IsUndefined(isolate)) {
Code* code = Code::cast(element);
CHECK_EQ(code->kind(), Code::OPTIMIZED_FUNCTION);
Object* next = code->next_code_link();
if (code->marked_for_deoptimization()) {
+ // Make sure that this object does not point to any garbage.
+ code->InvalidateEmbeddedObjects();
if (prev != NULL) {
// Skip this code in the optimized code list.
prev->set_next_code_link(next);
} else {
// There was no previous node, the next node is the new head.
context->SetOptimizedCodeListHead(next);
}
// Move the code to the _deoptimized_ code list.
code->set_next_code_link(context->DeoptimizedCodeListHead());
context->SetDeoptimizedCodeListHead(code);
} else {
// Not marked; preserve this element.
prev = code;
}
element = next;
}
// Finds the with activations of codes marked for deoptimization, search for
// the trampoline to the deoptimizer call respective to each code, and use it
// to replace the current pc on the stack.
for (StackFrameIterator it(isolate, isolate->thread_local_top()); !it.done();
it.Advance()) {
if (it.frame()->type() == StackFrame::OPTIMIZED) {
Code* code = it.frame()->LookupCode();
if (code->kind() == Code::OPTIMIZED_FUNCTION &&
code->marked_for_deoptimization()) {
// Obtain the trampoline to the deoptimizer call.
SafepointEntry safepoint = code->GetSafepointEntry(it.frame()->pc());
int trampoline_pc = safepoint.trampoline_pc();
DCHECK_IMPLIES(code == topmost_optimized_code,
safe_to_deopt_topmost_optimized_code);
// Replace the current pc on the stack with the trampoline.
it.frame()->set_pc(code->instruction_start() + trampoline_pc);
-
- // Make sure that this object does not point to any garbage.
- code->InvalidateEmbeddedObjects();
}
}
}
}
Because this change is so simple, it seems reasonable to conclude that code->InvalidateEmbeddedObjects() must be getting called more often than before.
What could go wrong if code->InvalidateEmbeddedObjects() is called at an inappropriate time? Judging just from the name, I would guess that more embedded objects could be invalidated. By itself, that doesn't sound like a problem, as long as the invalidation logic is sound, so we need to consider why https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4 might lead to unsound invalidation.
As @abernix suggested in this comment, the original call to code->InvalidateEmbeddedObjects() was protected by the scoped StackFrameIterator it(isolate, isolate->thread_local_top()) object, which means (among other things) that the body of the loop would not be executed when isolate->InContext() returns false. Reasoning: each StackFrameIterator has an AsyncStackTrace member, and AsyncStackTrace::capture doesn't capture any frames if isolate->InContext() is false.
More generally, moving code->InvalidateEmbeddedObjects() out of the stack frame loop relaxes any assumptions that were previously enforced by the StackFrameIterator. In particular, since isolate->context() returns a thread-local pointer, my tentative theory is that isolate->InContext() is supposed to return false while execution is happening on a different thread, which previously prevented the invalidation from happening in those cases.
This theory is relevant to our previous fibers discussion, but not limited to just fibers. We believe any multithreaded program could benefit from reverting https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4.
In the reproduction we used for bisecting, almost every core dump stack trace included this line:
if (env_->tick_callback_function()->Call(process, 0, nullptr).IsEmpty()) {
This Call ultimately calls into JavaScript through generated assembly code, to invoke the process._tickCallback function to process any nextTick callbacks.
The process tended to crash while calling into native code here, which is consistent with a theory that the process._tickCallback function object (or some other object involved in calling process._tickCallback) had been mistakenly invalidated/collected by the GC.
Ignoring issues of correctness, the new code is doing more work than it previously did, because code->InvalidateEmbeddedObjects() is called more frequently, which could be related to the performance regression flagged in https://github.com/webpack/webpack/issues/6767 and https://github.com/nodejs/node/issues/19444. That's pure speculation on my part (even more so than other parts of this comment)… but it couldn't hurt to do less invalidation work, right?
Many thanks to @hashseed for his bisection workflow recommendation, @abernix for working with me to narrow the problem down, Ninja for speeding up Node builds considerably, and my partner for putting up with a weekend of me babysitting this epic bisection.
I'm not familiar with the code, but the suggestion sounds reasonable.
I have a theory.
Deoptimizer::DeoptimizeMarkedCodeForContext deoptimizes optimized code that we previously marked. This roughly involves wiping references from code to objects so that they can be garbage collected (Code::InvalidateEmbeddedObjects), and overwriting the return address to jump to the deopt sequence.
Prior to https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4, we would only invalidate code on the current stack. After it, we would invalidate all code from the same context.
In cases where V8 is embedded with v8::Locker and the same isolate runs on several threads and has more than one stack, this means that we could now also be invalidating code from other threads. However, we still only patch the return addresses to the deopt sequence for code on the current thread.
So to reproduce this bug, what needs to happen is:
1. Thread A triggers Deoptimizer::DeoptimizeMarkedCodeForContext.
2. Thread B has optimized code from the same context on its stack that is marked for deoptimization; its embedded objects get invalidated, but its return address is never patched to the deopt sequence.
3. Thread B resumes and keeps running the invalidated code, loading undefined in B where we do not expect it.
So we should definitely revert that change. Thanks for the bisect!
@bmeurer does that sound correct to you?
Filed V8 issue https://bugs.chromium.org/p/v8/issues/detail?id=7573
Sounds reasonable to me, but @jaro-sevcik should definitely have a look here.
For the record, I doubt that this is causing the performance regressions between 8.9 and 8.10.
@hashseed's analysis sounds right, but the suggested fix does not. The fix for the multi-threaded stack walk problem landed later: v8/v8@596d55adf60ccea580692206acf2f02676e569b2. Looking at that patch, it should not be hard to back-merge it.
For completeness, I doubt reverting v8/v8@ea0e1e21ecc13884302e0c77edad67659f2e68b4 will improve things because it does not do anything about the stack walks. Without that patch we were failing GC stress, so it is likely the revert will make things worse (there is even a test case for the issue being fixed - v8/v8@aaac2f8e666aba46e24d427a5f466d10e0b48545 - you might want to try it out with v8/v8@ea0e1e21ecc13884302e0c77edad67659f2e68b4 reverted). I also believe that v8/v8@e15f554 was successfully re-landed - see v8/v8@895c321948da03e4c2f7fbc1f7035e47bb709d7d.
As for the performance implications of this code - the invalidation should not be more expensive than the patching we used to do before v8/v8@895c321, so I doubt this is causing any regression. I do not think this code should ever get hot because the cost is minuscule compared to the cost you paid for optimizing the function. The only scenario where this could perhaps get hot is during some big lazy deopt storm. If that is happening, you have other problems (since you will have to pay much more for optimizing those functions again after the deopt storm).
@jaro-sevcik I was able to cherry-pick both https://github.com/v8/v8/commit/f0acede9bb05155c25ee87e81b4b587e8a76f690 and https://github.com/v8/v8/commit/596d55adf60ccea580692206acf2f02676e569b2, without reverting https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4, and I can confirm that those commits also prevent the segfault problem. Given how much development has happened since 6.2.146, I am happy to defer to this solution rather than blindly reverting https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4. Your expertise here is deeply appreciated!
@jaro-sevcik & others: Is https://github.com/v8/v8/commit/f0acede9bb05155c25ee87e81b4b587e8a76f690 safe to back-port? I ask mostly because applying that commit first makes https://github.com/v8/v8/commit/596d55adf60ccea580692206acf2f02676e569b2 apply cleanly, given that they both touch src/deoptimizer.cc.
@nodejs/v8 do you have a recommendation for what we should do to fix this? I'd like to see this included in the v8.10.1-rc
I think v8/v8@f0acede is a bit risky to backport. As far as I remember it broke coverage (fixed by v8/v8@8303dc531b54bdb1ee979111d87715698ef20382), and quite possibly other things. In my opinion, it is safer to back merge v8/v8@596d55a manually - as far as I can see, the conflict is just in touching adjacent code, not really in the code being merged. Also, v8/v8@596d55a is a small local fix that only touches the deoptimizer stack walk. v8/v8@f0acede is much more involved (but it would also come with some perf boost - see our blog post that describes that change).
I am not sure I made one point clear enough: reverting v8/v8@ea0e1e2 is a bad idea - without that fix you will end up with inconsistent heap even for applications that do not use fibers. This is illustrated by the test case referenced above (v8/v8@aaac2f8).
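A manual back-merge along those lines might look like this (a sketch: $NODE_DIR is a placeholder for a Node checkout, and since deps/v8 inside the Node tree is not its own git repository, the patch is applied with --directory; the script is only syntax-checked here):

```shell
# Sketch: apply the V8 deoptimizer stack-walk fix to Node's vendored deps/v8.
# Conflicts in adjacent code may still need resolving by hand.
cat > /tmp/backmerge_596d55a.sh <<'EOF'
#!/bin/sh
set -e
cd "$NODE_DIR"
curl -L https://github.com/v8/v8/commit/596d55adf60ccea580692206acf2f02676e569b2.patch |
  git apply -3 --directory=deps/v8
EOF
sh -n /tmp/backmerge_596d55a.sh && echo "backmerge script syntax ok"
```

The --directory flag prepends deps/v8 to the V8-relative paths in the patch, which is what makes a V8 commit applicable inside the Node tree.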
@jaro-sevcik Thanks for your eyes on this, and for making it clear that reverting https://github.com/v8/v8/commit/ea0e1e2 is a bad idea.
I had missed https://github.com/v8/v8/commit/895c321 (which is indeed a re-land of https://github.com/v8/v8/commit/e15f554!) when we (@benjamn and I) had made the recommendation to revert https://github.com/v8/v8/commit/ea0e1e2. I had therefore seen https://github.com/v8/v8/commit/ea0e1e2 as an unwanted artifact, and that was somewhat supported by the fact that reverting it did fix the problem in our own reproduction of the fault (we had not run the deoptimize-lazy-weak.js test in https://github.com/v8/v8/commit/aaac2f8).
It seems the recommendation here is to backport https://github.com/v8/v8/commit/596d55adf60ccea580692206acf2f02676e569b2.
Either @benjamn or I would be happy to submit that PR (following Backporting to abandoned branches, I presume?), given the thumbs up to do so.
Aside from Node.js 8, it seems this would need to be backported to Node.js 9 as well, which is on V8 ~6.2.414.46?
possible backport in https://github.com/nodejs/node/pull/19477
@MylesBorins I'm also working on this! Might be slightly easier for me to verify, since I have the reproduction handy? Also happy to test your version.
I've been having a similar issue with very regular segfaults in 8.10 using https://github.com/laverdet/isolated-vm; #19477 has ~~completely solved the issue so far.~~ EDIT: Not solved for me after all; it still segfaults, it just occasionally decides to run fine for a few minutes before getting into a segfault crash loop again. (Docker)
@benjamn great detective work as always! I just landed on this thread because this is affecting my other whacky concurrency project isolated-vm.
Regarding the node-fibers TLS hackery: yeah, it's dirty, but it works great and really doesn't violate any promises V8 makes vis-à-vis concurrency and lockers. benjamn mentioned the same features can be implemented cleanly via threading, and I wanted to point out that the option does exist in fibers to build in such a way: passing -DCORO_PTHREAD will build free of any TLS hacks. Generally, when I want to rule out a V8 bug versus a node-fibers bug, I'll build with this option to narrow the scope.
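To rule the TLS tricks in or out, one could rebuild fibers with that define. Exactly how -DCORO_PTHREAD is plumbed into the build is an assumption here (node-gyp's handling of CXXFLAGS varies by platform and gyp version), so treat this as a sketch that is only syntax-checked:

```shell
# Hypothetical: rebuild node-fibers with real pthread-backed coroutines
# instead of the TLS hacks. Passing the define via CXXFLAGS is an assumption;
# adjust for your toolchain.
cat > /tmp/rebuild_fibers.sh <<'EOF'
#!/bin/sh
set -e
cd node_modules/fibers
CXXFLAGS="-DCORO_PTHREAD" npx node-gyp rebuild
EOF
sh -n /tmp/rebuild_fibers.sh && echo "rebuild script syntax ok"
```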
@ags131 Sorry to hear it still isn't working. Without claiming these are better solutions, what happens if you try just reverting https://github.com/v8/v8/commit/ea0e1e21ecc13884302e0c77edad67659f2e68b4, or cherry-picking https://github.com/v8/v8/commit/f0acede9bb05155c25ee87e81b4b587e8a76f690 before https://github.com/v8/v8/commit/596d55adf60ccea580692206acf2f02676e569b2? If either of those approaches solves the problem for you, then we should keep digging to find a better solution.
@benjamn I've continued attempting to trace the issue on my end, it may not be related to this as I originally thought. I tried it on 8.9 and was still getting segfaults, I'm starting to suspect my issue may be specific to my setup and current database contents. I'll give those a try though, maybe one will help.
landed in 9.x in 36f664ef9a; landed in 8.x in d46fafc8c9
This should come out in the next release of both
Is the patch included in the 8.11.0 release?
8.11.0 was a security release. It should be in 8.11.1 once it goes out.
I just tried https://github.com/nodejs/node/commit/d46fafc8c990899b4890dee2d6d8079c1308051f with 8.11.1. It works so far. One question, though: there is no file called test-locker.cc. Is that an additional file somewhere?
thanks and cheers
@himbeere Are you referring to the (plural) test-lockers.cc, which is located in deps/v8/test/cctest/test-lockers.cc? (It should be there!)
The patch is there, yes. But which file should I patch? It's not included in the source as far as I can see.
pages /usr/src # cd nodejs/
pages /usr/src/nodejs # wget https://nodejs.org/dist/v8.11.1/node-v8.11.1.tar.gz
--2018-04-05 09:34:38-- https://nodejs.org/dist/v8.11.1/node-v8.11.1.tar.gz
Resolving nodejs.org... 104.20.22.46, 104.20.23.46, 2400:cb00:2048:1::6814:162e, ...
Connecting to nodejs.org|104.20.22.46|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 31030225 (30M) [application/gzip]
Saving to: ‘node-v8.11.1.tar.gz’
node-v8.11.1.tar.gz 100%[=========================================================================>] 29.59M 8.54MB/s in 3.9s
2018-04-05 09:34:42 (7.53 MB/s) - ‘node-v8.11.1.tar.gz’ saved [31030225/31030225]
pages /usr/src/nodejs # tar xvzf node-v8.11.1.tar.gz
node-v8.11.1/
node-v8.11.1/.nycrc
node-v8.11.1/.remarkrc
node-v8.11.1/android-configure
...
...
node-v8.11.1/benchmark/arrays/zero-float.js
node-v8.11.1/benchmark/arrays/zero-int.js
pages /usr/src/nodejs # find . -name test-lockers.cc
pages /usr/src/nodejs #
thanks and cheers
In the node repo run the following command
curl -L https://github.com/nodejs/node/commit/d46fafc8c990899b4890dee2d6d8079c1308051f.patch | git am -3
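For anyone building a patched release themselves, the whole sequence might look like this (a sketch: the v8.11.1 tag name and single-job-count build flags are assumptions, and the script is only syntax-checked since cloning and compiling Node takes a long time):

```shell
# Sketch: check out Node v8.11.1, apply the backported fix, and rebuild.
cat > /tmp/build_patched_node.sh <<'EOF'
#!/bin/sh
set -e
git clone --branch v8.11.1 https://github.com/nodejs/node.git
cd node
curl -L https://github.com/nodejs/node/commit/d46fafc8c990899b4890dee2d6d8079c1308051f.patch |
  git am -3
./configure
make -j"$(nproc)"
EOF
sh -n /tmp/build_patched_node.sh && echo "build script syntax ok"
```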
Also worth noting that this patch has not yet gone out in an 8.x release; 8.11.1 was a patch release due to infra-related issues. Another version of 8.x will be coming in just over 2 weeks.
@MylesBorins good to know. I understand the Meteor guys are trying to fix the issue themselves for now.
@himbeere Correct, we (as in, Meteor, myself and @benjamn included) are trying to mitigate the situation which arose after 8.11.1 was announced. Of course, since it's a security update, many of our users are updating to it in the name of security (makes sense!). Unfortunately, it includes the breaking change outlined in this issue which causes segmentation faults for many of our users.
We're currently deploying a fix to our own hosting platform, Galaxy, by using our own custom build of Node 8.11.1 with the https://github.com/nodejs/node/commit/d46fafc8c990899b4890dee2d6d8079c1308051f commit applied, but that leaves non-Galaxy users needing to make the decision (and often unbeknownst to them, until they run into either problem) of whether to stay on 8.11.0 or tolerate the segmentation faults.
We're anxiously awaiting the 8.11.2(?) which should solve this!
Any news about the 8.11.2 release date?
@rodrigok we are aiming to get an RC out tomorrow with a release date of the 24th
/cc @gibfahn
Hello Devs.
Since Node.js 8.10, Rocket.Chat (https://rocket.chat/) crashes with a segmentation fault. I compiled Node.js with debug enabled and got this:
And:
Any idea how to fix this?
Here is the Rocket.Chat issue: https://github.com/RocketChat/Rocket.Chat/issues/10060
thanks and cheers