daicoden opened this issue 6 years ago
I agree, it should ideally do this. In practice though, it's not feasible to have the bazel plugin set all the (possibly required) environment variables.
To integrate with pydevd, we specify a non-hermetic, barely supported Python launcher when invoking 'bazel build'. This generates a script which doesn't set the normal environment variables (such as this one). The only practical alternative is to set up remote debugging, which would require the user to make source code changes whenever they wanted to debug.
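For context, such a source change would look roughly like the following (a minimal sketch; the host and port are placeholders, not values the plugin configures):

import pydevd

# Connect back to a debug server listening in the IDE before the code under test runs.
pydevd.settrace("localhost", port=5678)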
Hmm, well, is there any way to load fixtures without these environment variables?
Here's a thought: I'd be up for modifying the BUILD file when debugging... and since we're in non-hermetic land...
# At debug time, prepend this to the BUILD file you're working on:
external_repository("@io_intellij_bazel", "py_test")
Here py_test is a macro that calls native.py_test and also generates a companion test rule, (name)-intellij-env.
Shell file:
#!/bin/sh
dir="$INTELLIJ_ROOT"  # set by IntelliJ before running the test
env > "$dir/.intellij-env-variables"
The Bazel plugin then runs this companion test before the real test, and uses its output to set the environment variables for the real test.
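A rough Starlark sketch of how such a macro might look (the names io_intellij_bazel, dump_env.sh and the -intellij-env suffix are hypothetical illustrations, not existing targets):

# py_test.bzl (hypothetical): wraps native.py_test and adds a companion
# target that dumps the test environment for the plugin to read.
def py_test(name, **kwargs):
    native.py_test(name = name, **kwargs)
    native.sh_test(
        name = name + "-intellij-env",
        srcs = ["dump_env.sh"],  # the shell file sketched above
        tags = ["manual"],       # only built/run when the plugin asks for it
    )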
But something like that to shim in the variables set by Bazel doesn't sound too bad... :-D
We can set the environment variables without a problem. The problem is primarily the maintenance burden. We're currently looking into alternative methods for debugging, which might allow us to debug tests run through Bazel directly (in which case Bazel would set the required environment variables).
Generalizing this issue's title to cover more than just Python, since the same issue was reported for C++ in #256.
TEST_WORKSPACE is another variable I wish were set, to allow usage of bazel.RunfilesPath() in Go tests (this breaks when we run with the debugger because the variable isn't set).
Thank you for contributing to the IntelliJ repository! This issue has been marked as stale since it has not had any activity in the last 6 months. It will be closed in the next 14 days unless any other activity occurs or one of the following labels is added: "not stale", "awaiting-maintainer". Please reach out to the triage team (@bazelbuild/triage) if you think this issue is still relevant or you are interested in getting the issue resolved.
This issue has been automatically closed due to inactivity. If you're still interested in pursuing this, please reach out to the triage team (@bazelbuild/triage). Thanks!
I've recently created a wrapper for Bazel that executes tests outside of "bazel test" precisely so that I can control how to run the debugger.
I had to sort out how to deal with the environment variables as well, and what I ended up doing is extracting the information from Bazel so I don't risk the wrapper getting out of sync. To do that, I first execute the test with --run_under pointing Bazel to a script that does this:
# `writer` is the open file handle for the generated --run_under script;
# runfiles_txt, env_txt and cmdline_txt are the paths where the test's
# environment gets captured (the script records it instead of running the test).
writer.write("\n".join([
    "#!/bin/sh",
    f"pwd >{runfiles_txt}",             # the working directory (runfiles tree)
    f"env -0 >{env_txt}",               # NUL-separated environment variables
    f"echo \"${{@}}\" >{cmdline_txt}",  # the original test command line
]))
And then the wrapper processes the extracted information to reproduce the execution environment for the debugger.
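For illustration, reconstructing the environment from the env -0 dump could look something like this (a sketch, assuming env_txt is the file written by the script above):

def read_captured_env(env_txt):
    # `env -0` separates entries with NUL bytes, so values can safely contain newlines.
    with open(env_txt, "rb") as f:
        entries = f.read().split(b"\0")
    env = {}
    for entry in entries:
        if entry:
            key, _, value = entry.decode().partition("=")
            env[key] = value
    return env

# The wrapper can then launch the test (or the debugger) with env=read_captured_env(env_txt).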
I suppose the plugin could do something like this if remote debugging is disabled? The maintenance burden of this should be low (at the expense of an extra delay to run the tests, but I've seen that the extra bazel test --run_under invocation is fast enough).
I've implemented this approach in my branch: https://github.com/bazelbuild/intellij/compare/v2024.08.13-stable...sfc-gh-mgalindo:intellij:mgalindo-clwb-gdb?expand=1
I still need to guard it behind an experiment property before I upstream it, but would this be an acceptable solution?
@agluszak @jastice can you please look at that?
@sfc-gh-mgalindo sorry for the delay, I was OOO for two weeks. I can't really reproduce the issue; TEST_SRCDIR seems to be present when I debug the code:
https://github.com/user-attachments/assets/3cb986b6-8ee3-4901-a9f1-87266f56306a
@tpasternak Did you disable the remote debugger feature?
Sorry, I will look into this next week
Sorry for the delay again. I can confirm the problem exists. FTR, on Linux one has to set the registry flag bazel.clwb.debug.use.gdb.server to false.
I'm wondering why it works correctly with the remote gdb, though. Still, if you have a branch for that, please open a PR.
Exactly, I can confirm it's broken. cc @LeFrosch @ujohnny
I use TEST_SRCDIR for loading fixtures for tests (using pytest). When running a test target, this works fine, but when debugging, the environment variable TEST_SRCDIR is not set.
I believe TEST_SRCDIR should always be set for tests according to: https://docs.bazel.build/versions/master/test-encyclopedia.html
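For reference, a pytest fixture that resolves test data through TEST_SRCDIR might look like this (a minimal sketch; the workspace and file names are made up):

import os
import pytest

@pytest.fixture
def fixture_path():
    # TEST_SRCDIR points at the runfiles root when Bazel runs the test;
    # this is exactly the variable that is missing under the plugin's debugger.
    srcdir = os.environ["TEST_SRCDIR"]
    return os.path.join(srcdir, "my_workspace", "tests", "data", "example.json")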