Closed ychnh closed 4 years ago
TensorFlow 2.0.0 is now stable and this is still an issue since some modules like tensorflow.keras
are lazily imported: https://github.com/tensorflow/tensorflow/issues/31973
This also caused problems with many other IDEs. @davidhalter Do you think this can be fixed in Jedi or would we need changes in TensorFlow itself?
What does your import code look like? I have no idea why it doesn't work, and I don't really have time to understand the tensorflow codebase.
Thanks for taking a look, I am also not sure why it doesn't work.
I think it is due to TensorFlow switching to lazily imported submodules, which breaks autocompletion of user code that uses tensorflow.keras. The tensorflow/__init__.py looks like this:
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# LINT.IfChange
"""TensorFlow root package"""
from __future__ import absolute_import as _absolute_import
from __future__ import division as _division
from __future__ import print_function as _print_function
import sys as _sys
import importlib as _importlib
import types as _types
# Since TensorFlow Python code now resides in tensorflow_core but TensorFlow
# ecosystem code (e.g. estimator, but also even tensorflow) imports tensorflow
# we need to do forwarding between the two. To do so, we use a lazy loader to
# load and forward the top level modules. We cannot use the LazyLoader defined
# by tensorflow at tensorflow/python/util/lazy_loader.py as to use that we would
# already need to import tensorflow. Hence, we define it inline.
class _LazyLoader(_types.ModuleType):
  """Lazily import a module so that we can forward it."""

  # The lint error here is incorrect.
  def __init__(self, local_name, parent_module_globals, name):  # pylint: disable=super-on-old-class
    self._local_name = local_name
    self._parent_module_globals = parent_module_globals
    super(_LazyLoader, self).__init__(name)

  def _load(self):
    """Import the target module and insert it into the parent's namespace."""
    module = _importlib.import_module(self.__name__)
    self._parent_module_globals[self._local_name] = module
    self.__dict__.update(module.__dict__)
    return module

  def __getattr__(self, item):
    module = self._load()
    return getattr(module, item)

  def __dir__(self):
    module = self._load()
    return dir(module)
# Forwarding a module is as simple as lazy loading the module from the new path
# and then registering it to sys.modules using the old path
def _forward_module(old_name):
  parts = old_name.split(".")
  parts[0] = parts[0] + "_core"
  local_name = parts[-1]
  existing_name = ".".join(parts)
  _module = _LazyLoader(local_name, globals(), existing_name)
  return _sys.modules.setdefault(old_name, _module)
# This list should contain all modules _immediately_ under tensorflow
_top_level_modules = [
    "tensorflow._api",
    "tensorflow.python",
    "tensorflow.tools",
    "tensorflow.core",
    "tensorflow.compiler",
    "tensorflow.lite",
    "tensorflow.keras",
    "tensorflow.compat",
    "tensorflow.summary",  # tensorboard
    "tensorflow.examples",
]
# Estimator needs to be handled separately so we can still allow both
# import tensorflow_estimator and import tensorflow.estimator to work.
# Only in the second case do we actually need to do forwarding; the first case
# already defines most of the hierarchy, and eagerly forwarding would result in
# an import loop.
if "tensorflow_estimator" not in _sys.modules:
  _root_estimator = False
  _top_level_modules.append("tensorflow.estimator")
else:
  _root_estimator = True
# Lazy load all of the _top_level_modules; we don't need their names anymore.
for _m in _top_level_modules:
  _forward_module(_m)
# We still need all the names that are toplevel on tensorflow_core
from tensorflow_core import *
# These should not be visible in the main tf module.
try:
  del core
except NameError:
  pass
try:
  del python
except NameError:
  pass
try:
  del compiler
except NameError:
  pass
try:
  del tools
except NameError:
  pass
try:
  del examples
except NameError:
  pass
# Manually patch keras and estimator so tf.keras and tf.estimator work
keras = _sys.modules["tensorflow.keras"]
if not _root_estimator: estimator = _sys.modules["tensorflow.estimator"]
# Also import module aliases
try:
  from tensorflow_core import losses, metrics, initializers, optimizers
except ImportError:
  pass
# LINT.ThenChange(//tensorflow/virtual_root_template_v1.__init__.py.oss)
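To make concrete why this pattern defeats static analysis, here is a minimal, self-contained sketch of the same lazy-loader trick, using the stdlib json module in place of tensorflow_core (the names LazyLoader, lazy_json and my_pkg.json are invented for this demo):

```python
import importlib
import sys
import types


class LazyLoader(types.ModuleType):
    """Minimal stand-in for TensorFlow's inline _LazyLoader."""

    def __init__(self, local_name, parent_module_globals, name):
        self._local_name = local_name
        self._parent_module_globals = parent_module_globals
        super().__init__(name)

    def _load(self):
        # The real import only happens on first attribute access.
        module = importlib.import_module(self.__name__)
        self._parent_module_globals[self._local_name] = module
        self.__dict__.update(module.__dict__)
        return module

    def __getattr__(self, item):
        return getattr(self._load(), item)


# Forward the stdlib "json" module under a made-up dotted name, the way
# _forward_module() forwards tensorflow.* to tensorflow_core.*.
lazy_json = LazyLoader("lazy_json", globals(), "json")
sys.modules.setdefault("my_pkg.json", lazy_json)

# At runtime this works fine; a static analyzer inspecting lazy_json only
# ever sees an empty ModuleType subclass, which is why completions come
# up empty.
print(lazy_json.dumps({"a": 1}))  # → {"a": 1}
```

Everything interesting happens inside `__getattr__`, i.e. at runtime; nothing in the module source names the attributes that will eventually exist, so there is nothing for a static completer to find.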
VSCode and PyCharm also have problems with this change, though I am not sure what the right way to resolve it would be: https://github.com/microsoft/python-language-server/issues/818
This would be easily fixable with a small plugin addition (in jedi/plugins). However, let's wait for a bit and see what happens; maybe the tensorflow code will also shift if all editors struggle with it.
@lgeiger Can you point me to the place where the code is defined that you mentioned? I would be especially interested in how this behaves with latest tensorflow master.
Because to me it looks like the problematic code of yours doesn't exist anymore on tensorflow master.
Can you point me to the place where the code is defined that you mentioned? I would be especially interested in how this behaves with latest tensorflow master.
This is the __init__.py that TensorFlow installs on the user's system. I just checked with tf-nightly and the issue is still there.
Because to me it looks like the problematic code of yours doesn't exist anymore on tensorflow master.
The code is a bit hidden since TensorFlow uses bazel to build the pip package. It is located here.
I am having the same issue: jedi-vim doesn't autocomplete anything from the keras module. Autocompletion will work in VSCode if I write from tensorflow import keras as k, but this doesn't fix the issue in vim with jedi-vim.
This would be easily fixable with a small plugin addition (in jedi/plugins). However, let's wait for a bit and see what happens; maybe the tensorflow code will also shift if all editors struggle with it.
@davidhalter It looks like there are no plans to change this on the TensorFlow side. Affected versions are 1.15.0, 2.0.0 and the upcoming 2.1.0 release. I am not part of the TensorFlow team, so I can't say for certain, but I think a jedi plugin sounds like a sensible idea. I have no experience with contributing to jedi, but is there anything I could help with?
I actually think this is a very bad idea. It's a bit complicated to write those plugins. IMO it would be nicer to fix this issue on the TensorFlow side; it's probably very possible. One way would be to just replace keras = _sys.modules["tensorflow.keras"] with from tensorflow import keras. This does essentially the same thing.
It's also nice to use something like:
import typing

if not typing.TYPE_CHECKING:
    keras = _sys.modules["tensorflow.keras"]
This only works in Python 3 (on Python 2 the typing backport would need to be installed).
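A self-contained sketch of that suggestion, with the stdlib json module standing in for tensorflow.keras (the if/else split is my own completion of the pattern, not TensorFlow code):

```python
import sys
import typing

import json as _json_impl  # makes sure "json" is already in sys.modules

if typing.TYPE_CHECKING:
    # Static analyzers assume TYPE_CHECKING is True and follow this
    # ordinary import, so completions work.
    import json
else:
    # At runtime TYPE_CHECKING is False, so the sys.modules lookup runs,
    # just like keras = _sys.modules["tensorflow.keras"] does.
    json = sys.modules["json"]

print(json.dumps([1, 2]))  # → [1, 2]
```

Both branches bind the same module object, so runtime behavior is unchanged; only what the static analyzer sees differs.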
This fix worked for me
Since this was fixed upstream, I'm closing.
I think this is not fixed. Autocompletion beyond first/second level in tensorflow 2.0 does not work (tested with vim-jedi and youcompleteme).
For instance, tf.data.<autocomplete> -> tf.data.Dataset works, whereas tf.data.Dataset.<autocomplete> doesn't give any options. Autocompletion within jupyter notebooks or an interactive python session works though, so there must be a way to get this working within jedi.
If you give me instructions how I could debug the problem in jedi I will be glad to contribute.
I think you could try adding 'tensorflow' to jedi.settings.auto_import_modules, which might help.
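If you would rather not patch files by hand, the same setting can be changed programmatically wherever you can run Python before completions are requested (guarded here in case jedi is not installed; assumes a jedi version that still exposes jedi.settings.auto_import_modules, as 0.15/0.16 did):

```python
# Append tensorflow to jedi's auto-import list at runtime instead of
# editing the vendored settings.py.
try:
    import jedi

    if "tensorflow" not in jedi.settings.auto_import_modules:
        jedi.settings.auto_import_modules.append("tensorflow")
except ImportError:
    pass  # jedi not installed; nothing to configure
```

auto_import_modules tells jedi to actually import the listed modules instead of analyzing them statically, which is exactly what the lazy loader needs.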
@lgeiger that worked. Thank you very much!
For everyone wondering how to do that in YouCompleteMe:
1) Go to your YouCompleteMe jedi install, e.g. $HOME/.vim/bundle/YouCompleteMe/third_party/ycmd/third_party/jedi_deps/jedi/jedi
2) Open settings.py
3) Change
3) Change
auto_import_modules = [
'gi', # This third-party repository (GTK stuff) doesn't really work with jedi
]
to
auto_import_modules = [
'gi', # This third-party repository (GTK stuff) doesn't really work with jedi
'tensorflow',
]
4) In vim (only once) do: :YcmRestartServer
For jedi-vim the corresponding path is $HOME/.vim/bundle/jedi-vim/pythonx/jedi/jedi
@lgeiger Didn't you solve this one in https://github.com/tensorflow/tensorflow/pull/34629? Does @image357 maybe have an older version?
I'm using the latest tensorflow 2.1.0 installed with pip:
$ pip3 install tensorflow --upgrade
$ python3
>>> import tensorflow as tf
>>> tf.__version__
'2.1.0'
I'm using the latest (master branch) YouCompleteMe with jedi version 0.15.0 (I have no control over that, since it's fixed by the repo's submodule and I don't want to mess with the dependencies). I also tested with the latest (master branch) jedi-vim.
When did you exactly test? jedi-vim just received support for 0.16.0 like 23 hours ago :) https://github.com/davidhalter/jedi-vim/pull/991
@davidhalter I just checked out the latest (master branch) jedi-vim and did the test again. It only works when I add 'tensorflow' to auto_import_modules as described above. By working I mean stuff like tf.data.Dataset.<autocomplete>. First/second level autocompletion seems to work just fine without changes to auto_import_modules.
It works for me with tensorflow==2.0.0b1. Unfortunately I cannot test the latest versions on Ubuntu 16.04:
pip install tensorflow==2.1.0
Collecting tensorflow==2.1.0
Could not find a version that satisfies the requirement tensorflow==2.1.0 (from versions: 0.12.1, 1.0.0, 1.0.1, 1.1.0rc0, 1.1.0rc1, 1.1.0rc2, 1.1.0, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.3.0rc0, 1.3.0rc1, 1.3.0rc2, 1.3.0, 1.4.0rc0, 1.4.0rc1, 1.4.0, 1.4.1, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.8.0rc0, 1.8.0rc1, 1.8.0, 1.9.0rc0, 1.9.0rc1, 1.9.0rc2, 1.9.0, 1.10.0rc0, 1.10.0rc1, 1.10.0, 1.10.1, 1.11.0rc0, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.12.0rc0, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.2, 1.12.3, 1.13.0rc0, 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.13.2, 1.14.0rc0, 1.14.0rc1, 1.14.0, 2.0.0a0, 2.0.0b0, 2.0.0b1)
No matching distribution found for tensorflow==2.1.0
It's also strange, because tensorflow-2.1.0-cp36-cp36m-manylinux2010_x86_64.whl looks like a wheel I should be able to use...
I can confirm this behavior. It works for tensorflow==2.0.0b1 without any changes to auto_import_modules. I've also tested with tensorflow==2.0.0, where it is broken for both the cpu and gpu versions (just to be sure, because version 2.1.0 is now cpu+gpu by default).
I guess this is a tensorflow issue after all. @lgeiger I just saw the PR is still open. So the fix is not supposed to be in 2.1.0? Or should I create a new issue for tensorflow?
I finally got 2.1.0 installed. tf.data.<autocomplete> seems to work for me; it returns Dataset, DatasetSpec, experimental and a few other completions.
I don't think we should create an issue in their tracker, since it seems to work for me.
Yes, tf.data.<autocomplete> works for me, too. As I've said: first/second level autocompletion works as expected. However, tf.data.Dataset.<autocomplete> is broken and doesn't give, for instance, tf.data.Dataset.from_generator.
Sorry for the delay. I wrote about this in #1486, so this is a copy, but it's essentially the same issue, just that Jedi cannot resolve from tensorflow.python.data.ops.dataset_ops import DatasetV2 as Dataset.
Just checked again why this is happening: tensorflow 2.1.0 uses a directory that contains pretty much nothing (especially no keras folder). It then modifies sys.modules (see code below). sys.modules modifications are not understood by Jedi, and if you read the code below, you can probably understand why. So if they don't undo this change, I guess they should just write stubs.
For now I'm not going to do anything. This is an issue with the tensorflow code IMO. Most of the time these kinds of structures are pretty stupid and don't really provide a benefit.
def _forward_module(old_name):
  parts = old_name.split(".")
  parts[0] = parts[0] + "_core"
  local_name = parts[-1]
  existing_name = ".".join(parts)
  _module = _LazyLoader(local_name, globals(), existing_name)
  return _sys.modules.setdefault(old_name, _module)
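The "write stubs" route mentioned above would mean shipping .pyi files whose plain imports a static analyzer can follow, instead of the runtime forwarding. A hypothetical fragment of such a stub follows; the exact tensorflow_core paths are an assumption based on the listing earlier in this thread, not something TensorFlow actually ships:

```python
# Hypothetical tensorflow/__init__.pyi fragment. Type checkers and Jedi
# would read these static re-exports instead of executing _forward_module.
from tensorflow_core.python import keras as keras
from tensorflow_core._api.v2 import data as data
```

Because a stub is never executed, the sys.modules trickery becomes irrelevant to editors; they complete against the stub alone.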
Thank you very much for looking into this! I agree that this should be fixed on the tensorflow side. Do you already have or know an open issue where we could report this? If not I would open one.
Do you already have or know an open issue where we could report this? If not I would open one.
There has been quite a lot of discussion about this in https://github.com/tensorflow/community/pull/182, and it looks like this might have been fixed in the latest release candidate of TensorFlow 2.2.
I can confirm that it's fixed for me with tensorflow==2.2.0-rc3.
I am using tensorflow 2.3.0, yet the issue is not fixed for me.
I am using this small workaround. In my case I needed to use the compat module of tensorflow, so I did this:
import tensorflow as tf_tmp  # initializes the library in memory
tf = tf_tmp.compat.v1
This is a workaround for the lazy loading issues of tensorflow with jedi.
@Chidhambararajan: for a temporary workaround you could try my previous solution: https://github.com/davidhalter/jedi/issues/1391#issuecomment-581400073
Hi there,
Since this is still not fixed for TF2-GPU as of now (2.8.0), I found a way to solve the problem temporarily. Just append the following code at the end of tensorflow/__init__.py:
if _typing.TYPE_CHECKING:
    # imports here have no effect while the program is running;
    # they exist only to guide static analysis
    from .python import keras
    import keras.layers
I have the latest version of jedi-vim and jedi. The autocompletion works great for tensorflow-gpu 1.13.0; I have issues when using jedi with tensorflow-gpu 2.0.0rc0. Right now I'm just writing my code in the 1.13.0 environment, so it's not a big issue, but it would be nice to have this fixed.