qzmfranklin opened 1 year ago
You can disable the error
python_register_toolchains(
    name = ...,
    python_version = ...,
    ignore_root_user_error = True,
)
It is good there is an option to choose but you are choosing between two evils: either you get the pyc cache misses or you have to go out of your way to support non-root builds (e.g. when building in a docker container).
We have opted for the second evil, but it creates so much extra work.
I really wish there was a way to solve the cache issue without this extra requirement.
We hit the same issues - my feeling is that it's just not the responsibility of rules_python to enforce this.
To be fair, it's probably CPython that's to blame, since it's programmed to produce the .pyc files on the fly. Not sure they'd be willing to add a flag to NOT generate .pyc files for the stdlib on the fly just for this use case, though.
You can disable the error
python_register_toolchains(
    name = ...,
    python_version = ...,
    ignore_root_user_error = True,
)
Works for me!
When it is not feasible to run as non-root, currently one needs to modify the build files of the project. This is a bit clumsy. Could the ignore_root_user_error option be exposed to the user who runs the build of some random project "xyz", which happens to use rules_python internally? What would be the standard approach to do so?
You can disable the error
python_register_toolchains(
    name = ...,
    python_version = ...,
    ignore_root_user_error = True,
)
Works for me!
@daixiang0 / @nsubiron Could you please let me know how and where you made this edit?
^ I think this workaround doesn't suffice for build tools seeking to use rules_python in their implementations, since they aren't the root module. More context over in https://github.com/hedronvision/bazel-compile-commands-extractor/issues/166
It'd be quite valuable to have this just work out of the box. That is, if running as root, fall back to the best behavior you can rather than erroring.
Could this setting at least be overridden with an environment variable? I have a rules_$company module, used by maybe a dozen separate repos, that establishes uniform toolchains and CI scripts among other things. Since the toolchain can only be set by the root module, I have no way of overriding this other than separately tracking the toolchain in each repo.
Could this setting at least be overridden with an environment variable?
Yes, I think that's fine. Send a PR?
re: the issue in general: Unfortunately, I don't see any great solutions to this problem.
Fall back to best behavior you can
I am somewhat inclined to agree. The reason being: the problem a writable install causes is spurious rebuilds. When it's an error to have a writable install and people have to opt out, then they're just going to opt out, because that's going to mostly work rather than entirely not work. And they just have to live with spurious rebuilds.
I don't see any interpreter options or environment variables to use hash-based pycs instead of timestamps. Which is too bad; it would have been a nice fix to this. Nor do I see any options to control where pycs are read from/written to just for the stdlib.
When a runtime is initially downloaded and extracted, we could run a compile step to pre-populate the pyc files. However, this only works if the downloaded runtime runs on the host. If we could somehow ensure that there was a host-runnable python of the same version available, then that could be used to perform the compilation; or if we had a prebuilt tool to perform this. This does seem like one of the better options, though. If the host can't run the Python, then we just move along.
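For reference, a pre-population step like that could lean on the stdlib `compileall` module, which (per PEP 552, since Python 3.7) supports hash-based pyc invalidation, so the resulting pycs wouldn't even depend on file timestamps. A minimal sketch, using a throwaway directory to stand in for the runtime's lib dir:

```python
import compileall
import pathlib
import py_compile
import tempfile

# Hypothetical stand-in for the extracted runtime's stdlib directory.
lib_dir = pathlib.Path(tempfile.mkdtemp()) / "lib"
lib_dir.mkdir()
(lib_dir / "example.py").write_text("X = 1\n")

# Pre-populate __pycache__ with hash-based pycs (PEP 552), so later
# timestamp changes to the .py files don't invalidate them.
ok = compileall.compile_dir(
    str(lib_dir),
    quiet=1,
    invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
)
print(bool(ok))
print(len(list((lib_dir / "__pycache__").glob("*.pyc"))))
```

As noted above, this only helps if the downloaded runtime (or another interpreter of the same version) can actually run on the host to do the compiling.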
Alternatively, the pyc could be generated at build time. We can do this for regular libraries now; I don't see why we couldn't also do it for the py files coming from the runtime. The downside is it'll add some build overhead, since there's a few thousand files to compile.
Maybe if the stdlib was in a zip file we'd have more options? This idea was floated as a way to reduce the number of files in the runtime, too. I'm not sure where Python will write pyc files if it reads a file from a zip.
It'd be most ideal if the upstream python-standalone packages came with the pyc files already. That would make this go away entirely.
@rickeylev Maybe I'm missing something, but there are several ways to deal with pyc generation without having to mark directories as read-only:
$ docker run --rm -it python:3.11 bash
root@8c07a07f5535:/# find /usr -name '*.pyc' | wc -l
1031
root@8c07a07f5535:/# find /usr -name '__pycache__' -exec rm -rf {} +
root@8c07a07f5535:/# find /usr -name '*.pyc' | wc -l
0
root@8c07a07f5535:/# PYTHONPYCACHEPREFIX=/tmp/pycs python3 -c 'import sys; print(sys.version)'
3.11.3 (main, May 23 2023, 13:25:46) [GCC 10.2.1 20210110]
root@8c07a07f5535:/# find /usr -name '*.pyc' | wc -l
0
root@8c07a07f5535:/# find /tmp/pycs -name '*.pyc' | wc -l
4
root@8c07a07f5535:/# PYTHONDONTWRITEBYTECODE=True python3 -c 'import sys; print(sys.version)'
3.11.3 (main, May 23 2023, 13:25:46) [GCC 10.2.1 20210110]
root@8c07a07f5535:/# find /usr -name '*.pyc' | wc -l
0
root@8c07a07f5535:/# python3 -c 'import sys; print(sys.version)'
3.11.3 (main, May 23 2023, 13:25:46) [GCC 10.2.1 20210110]
root@8c07a07f5535:/# find /usr -name '*.pyc' | wc -l
4
E.g. the pyc files could be written to a scratch directory, or you can tell the Python interpreter to use existing pyc files but not write new ones.
Unfortunately, environment variables aren't a good option because they get inherited by sub-processes (we've had bug reports due to using env vars and are trying to avoid them). They're also somewhat of a pain because, in the bootstrap, we have to have special logic to respect the environment variable if the caller sets it. -B et al. are slightly better, but then have the issue of "what if the caller sets the env var?".
sys.dont_write_bytecode is mutable; however, setting it would happen after the interpreter starts up, so anything imported before the bootstrap can set it suffers from the issue. A smaller window, at least, I guess.
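To illustrate that window: `sys.dont_write_bytecode` only affects imports that happen after it is set, so anything imported earlier has already had its chance to write pycs. A small self-contained sketch (the module name is made up):

```python
import importlib
import pathlib
import sys
import tempfile

# Create a throwaway module to import.
mod_dir = pathlib.Path(tempfile.mkdtemp())
(mod_dir / "throwaway_mod.py").write_text("VALUE = 42\n")
sys.path.insert(0, str(mod_dir))

# Flip the flag *before* the import: no __pycache__ should be created
# for this module. Imports that already ran are unaffected either way.
sys.dont_write_bytecode = True
mod = importlib.import_module("throwaway_mod")
print(mod.VALUE)
print((mod_dir / "__pycache__").exists())
```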
Disabling all runtime pyc generation is also a heavy hammer -- if someone wants to set the pyc cache prefix, then why not? Considering how Bazel works, that's not a bad idea to get pyc files with less overhead. Also, this problem only occurs when Python is trying to create pyc files in the backing repository directory instead of the runfiles directory. I think that behavior is somewhat specific to the interpreter itself; IIRC it jumps through some extra hoops to resolve symlinks and find the "real" location of the interpreter.
Also -- I'm +1 on making the "setting read only logic" optimistic in nature. If it works, great, if not, oh well (the only thing the user can do is disable it entirely -- same net effect). I'd approve a PR doing that.
The main issue I see with the current approach is that it's all-or-nothing. E.g. why does ignore_root_user_error completely skip the "mark read-only" step (assuming that step is the best thing available right now in the first place), instead of matching its name better (just ignore the error, maybe turn it into a warning)?
Right now, if you want to use rules_python in many CI systems, you have to either create a non-root user upfront in the container or use ignore_root_user_error. While the latter might work well enough in the CI context (let's say for PR tests), it also silently disables the "protection" on developer systems.
As ignore_root_user_error is a module setting, it can't be influenced by a select() either, so you can't choose depending on the platform you're on.
Also, being on Windows or enabling ignore_root_user_error adds pyc files to an exclude list. Apparently that was considered acceptable enough on Windows to not show a warning/error, so maybe we could make that the default and be clearer about potential caching problems? Right now there is a lot of hidden stuff happening.
I'd suggest the following approach:
on_windows = "windows" in platform
if (on_windows or is_root) and not ctx.hide_toolchain_cache_warning:
    show_warning("If you experience toolchain caching issues, please read this: ...")

glob_exclude = [
    # These pycache files are created on first use of the associated python files.
    # Exclude them from the glob because otherwise between the first time and second
    # time a python toolchain is used, the definition of this filegroup will change,
    # and depending rules will get invalidated.
    # See https://github.com/bazelbuild/rules_python/issues/1008 for unconditionally
    # adding these to toolchains so we can stop ignoring them.
    "**/__pycache__/*.pyc",
    "**/__pycache__/*.pyo",
]
or if you want to keep the read-only bit where possible
on_windows = "windows" in platform
if (on_windows or is_root) and not ctx.hide_toolchain_cache_warning:
    show_warning("If you experience toolchain caching issues, please read this: ...")

if not on_windows:
    # Mark the library as read-only as a defense-in-depth mechanism if possible,
    # to prevent the creation of dynamic files.
    # Will be a no-op if the user has CAP_DAC_OVERRIDE (like root), as they can
    # bypass file ACLs.
    lib_dir = "lib"
    repo_utils.execute_checked(
        rctx,
        op = "python_repository.MakeReadOnly",
        arguments = [repo_utils.which_checked(rctx, "chmod"), "-R", "ugo-w", lib_dir],
        logger = logger,
    )

glob_exclude = [
    # These pycache files are created on first use of the associated python files.
    # Exclude them from the glob because otherwise between the first time and second
    # time a python toolchain is used, the definition of this filegroup will change,
    # and depending rules will get invalidated.
    # See https://github.com/bazelbuild/rules_python/issues/1008 for unconditionally
    # adding these to toolchains so we can stop ignoring them.
    "**/__pycache__/*.pyc",
    "**/__pycache__/*.pyo",
]
(disclaimer: Bazel doesn't support warnings yet, but you get the idea)
find the "real" location of the interpreter.
@rickeylev, I think this could also be due to the fact that in the repository_rule loading phase we are using said interpreter to do things, and that will inevitably create pyc files in the interpreter repository. If we get rid of Python usage in the whl_library repository rule, I think we could potentially eliminate that source of pyc generation.
That said, I am assuming that this is relevant only if the "find the 'real' location of the interpreter" behavior is due to our code (we do have code that resolves the symlinks to the real location of the interpreter).
repository rule phase is using the interpreter
That does sound plausible. Setting the env vars to inhibit PYC creation might help for those invocations.
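In plain Python, inhibiting pyc creation for such a sub-invocation is just a matter of setting the environment on the subprocess; in a repository rule the same idea would presumably go through the `environment` argument of `rctx.execute`. A sketch with a made-up helper module:

```python
import os
import pathlib
import subprocess
import sys
import tempfile

# Hypothetical helper module, standing in for code a repo rule would run.
work = pathlib.Path(tempfile.mkdtemp())
(work / "helper.py").write_text('GREETING = "hello from helper"\n')

# Disable bytecode writing for just this invocation, so importing the
# helper leaves no __pycache__ behind in the directory.
env = dict(os.environ, PYTHONDONTWRITEBYTECODE="1")
result = subprocess.run(
    [sys.executable, "-c", "import helper; print(helper.GREETING)"],
    cwd=work,
    env=env,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
print((work / "__pycache__").exists())
```

Because the variable is set only on the child process's environment, it avoids the inheritance concern raised earlier about setting env vars globally.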
🐞 bug report
Affected Rule
The issue is caused by the rule: during the loading phase.
Is this a regression?
I don't think so.
Description
Per #713, `rules_python` cannot be used by `root`. CI systems such as CircleCI and BuildBuddy use the `root` user by default. Setting up a non-root user in those systems isn't very straightforward. Also, it does not look like the responsibility of an ordinary user of rules_python to have to do this.
🔬 Minimal Reproduction
Example build event:
https://app.buildbuddy.io/invocation/55238174-54c3-459c-8d80-72722f2d00f9
🔥 Exception or Error
Relevant error message:
🌍 Your Environment
Operating System:
Output of bazel version:
Rules_python version:
Anything else relevant?