hdufs closed this issue 6 years ago.
@hdufs Did you fix your problem? I have the same problem.
@MarucosChou Sorry, I haven't fixed it.
@JasonAtNvidia
What version of TF are you trying to build? Google broke the easy TX2 build when they switched to 1.8.0rc0, and it hasn't worked in a few weeks. The easiest path right now is to use the pre-built binaries I have linked. If you need to build master, look into this bug thread for a patch: https://github.com/tensorflow/tensorflow/issues/18643. I haven't had time to sit down and fix my build script yet, or to see whether Google has fixed it.
Did you flash your TX2 with JetPack 3.2?
@JasonAtNvidia I'm having the same problem, but on a TX1. I flashed my TX1 with JetPack 3.2 and am trying to build TF r1.6, and I'm using the prebuilt binaries linked in your account. I'm still getting the problem, and I'm very new at this, so any suggestion would be very helpful.
The TX1 is not easy. You need to put in an external memory card or USB stick and build on that. However, the wheels I have linked should work; they were built against CUDA Compute Capability 5.3, which is specifically for the TX1. Can you paste the error you get with the wheel?
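A rough sketch of the external-storage swap step, in case it helps. The mount point is an assumption; use wherever your card or stick is actually mounted (check `lsblk`):

```shell
# Put an 8 GB swap file on external storage so bazel does not exhaust the
# TX1's 4 GB of RAM. MOUNT is an assumed mount point; adjust for your device.
MOUNT=/mnt/usb
if mountpoint -q "$MOUNT"; then
    sudo fallocate -l 8G "$MOUNT/swapfile.swap"
    sudo chmod 600 "$MOUNT/swapfile.swap"
    sudo mkswap "$MOUNT/swapfile.swap"
    sudo swapon "$MOUNT/swapfile.swap"
else
    echo "Mount your memory card or USB stick at $MOUNT first (see lsblk)."
fi
```

Remember to `swapoff` and delete the file after the build if you don't want it permanently.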
Hello, I'm sorry, I was mistaken: I wasn't using the wheels you provided earlier, as I said in my previous comment. I'm a newbie at all this, so I made a mistake. Now I'm installing the wheel you provided, but I don't know how to build externally, so I'm just trying it with a simple command directly on the board. If I get an error I'll approach you once again, but really, big thanks for your help so far.
Edit: it installed without any errors. Thanks a lot!
@JasonAtNvidia I got this when I ran pip with the wheel:
Exception:
Traceback (most recent call last):
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connection.py", line 137, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/util/connection.py", line 91, in create_connection
    raise err
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/util/connection.py", line 81, in create_connection
    sock.connect(sa)
OSError: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 560, in urlopen
    body=body, headers=headers)
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 346, in _make_request
    self._validate_conn(conn)
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 787, in _validate_conn
    conn.connect()
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connection.py", line 217, in connect
    conn = self._new_conn()
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connection.py", line 146, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fa55665c0>: Failed to establish a new connection: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 209, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 328, in run
    wb.build(autobuilding=True)
  File "/usr/lib/python3/dist-packages/pip/wheel.py", line 748, in build
    self.requirement_set.prepare_files(self.finder)
  File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 360, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 512, in _prepare_file
    finder, self.upgrade, require_hashes)
  File "/usr/lib/python3/dist-packages/pip/req/req_install.py", line 273, in populate_link
    self.link = finder.find_requirement(self, upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 442, in find_requirement
    all_candidates = self.find_all_candidates(req.name)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 400, in find_all_candidates
    for page in self._get_pages(url_locations, project_name):
  File "/usr/lib/python3/dist-packages/pip/index.py", line 545, in _get_pages
    page = self._get_page(location)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 648, in _get_page
    return HTMLPage.get_page(link, session=self.session)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 757, in get_page
    "Cache-Control": "max-age=600",
  File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 480, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python3/dist-packages/pip/download.py", line 378, in request
    return super(PipSession, self).request(method, url, *args, **kwargs)
  File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/share/python-wheels/CacheControl-0.11.5-py2.py3-none-any.whl/cachecontrol/adapter.py", line 46, in send
    resp = super(CacheControlAdapter, self).send(request, **kw)
  File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/util/retry.py", line 228, in increment
    total -= 1
TypeError: unsupported operand type(s) for -=: 'Retry' and 'int'
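For anyone hitting this: the final TypeError is a quirk of the old pip shipped with Ubuntu 16.04 and only hides the real failure, which is that the board has no network route (Errno 101), so pip cannot reach PyPI to resolve dependencies. A minimal offline-install sketch, assuming the wheel file has already been copied to the board (the filename and the `WHEEL` variable are placeholders):

```shell
# Install a locally downloaded wheel without touching the network at all,
# which sidesteps both the "Network is unreachable" error and the old-pip
# Retry crash. Replace the placeholder filename with your actual wheel.
WHEEL=tensorflow-1.6.0-cp35-cp35m-linux_aarch64.whl
if [ -f "$WHEEL" ]; then
    pip3 install --no-index --find-links=. "$WHEEL"
else
    echo "Copy the wheel onto the board first, then re-run:"
    echo "  pip3 install --no-index --find-links=. $WHEEL"
fi
```

If the wheel's dependencies (numpy, six, etc.) are not already installed, they must also be present locally, since `--no-index` forbids any PyPI lookup.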
@JasonAtNvidia I have a similar issue on a TX2 flashed hours ago with JetPack 3.2:
Already on 'r1.6'
Your branch is up-to-date with 'origin/r1.6'.
PYTHON_BIN_PATH=/usr/bin/python
GCC_HOST_COMPILER_PATH=/usr/bin/gcc
CUDA_TOOLKIT_PATH=/usr/local/cuda
TF_CUDA_VERSION=9.0
TF_CUDA_COMPUTE_CAPABILITIES=5.3,6.2
CUDNN_INSTALL_PATH=/usr/lib/aarch64-linux-gnu
TF_CUDNN_VERSION=7.0.5
WARNING: Running Bazel server needs to be killed, because the startup options are different.
You have bazel 0.11.1- (@non-git) installed.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl         # Build with MKL support.
    --config=monolithic  # Config for mostly static monolithic build.
    --config=tensorrt    # Build with TensorRT support.
Configuration finished
swapon: /home/nvidia/JetsonTFBuild/TensorFlow_Install/swapfile.swap: swapon failed: Device or resource busy
Looks like Swap not desired or is already in use
..................................
ERROR: Skipping '//tensorflow/tools/pip_package:build_pip_package': error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'build_defs.bzl': no such package '@local_config_tensorrt//': Traceback (most recent call last):
  File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/tensorrt/tensorrt_configure.bzl", line 163
    auto_configure_fail("TensorRT library (libnvinfer) v...")
  File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/gpus/cuda_configure.bzl", line 152, in auto_configure_fail
    fail(("\n%sCuda Configuration Error:%...)))
Cuda Configuration Error: TensorRT library (libnvinfer) version is not set.
WARNING: Target pattern parsing failed.
ERROR: error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'build_defs.bzl': no such package '@local_config_tensorrt//': Traceback (most recent call last):
  File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/tensorrt/tensorrt_configure.bzl", line 163
    auto_configure_fail("TensorRT library (libnvinfer) v...")
  File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/gpus/cuda_configure.bzl", line 152, in auto_configure_fail
    fail(("\n%sCuda Configuration Error:%...)))
Cuda Configuration Error: TensorRT library (libnvinfer) version is not set.
INFO: Elapsed time: 1.881s
FAILED: Build did NOT complete successfully (0 packages loaded)
currently loading: tensorflow/tools/pip_package
@valentindbdg I have the same problem as you. It seems that you used this command:
sudo bash BuildTensorFlow.sh -b r1.6
If you edit the helper script to set TF_NEED_TENSORRT=1 and use this command:
sudo bash BuildTensorFlow.sh
that error is no longer shown, but a new error appears:
unrecognized command line option '-mfpu=neon'
What should I do about this problem? @JasonAtNvidia
Hey everybody. I have been travelling the last couple of weeks. I'll spend some time this weekend trying to bug-fix this script. It is becoming difficult to stay backwards compatible with many versions of TF while keeping this script small and readable. Google dropped ARM compatibility with r1.8 by including optimized NEON code in the png functionality. I have filed a bug with TF about this issue and have been discussing Jetson tests through some back channels at Google. There is a bugfix for this that I will attempt to include in the helper script, which will also break some backwards compatibility. Hopefully I'll have some successful builds this weekend and can include some new pre-built binaries with the next revision. Thank you for your patience.
Awesome! Thank you very much in advance.
@timoonboru I'm going to file a bug with TF. The '-mfpu=neon' option is hardcoded somewhere in their build files. I'll dig more into it, but the fix is going to be in TF itself and not in my build script.
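For anyone who wants to locate the flag themselves while waiting for the upstream fix, a quick sketch (the `third_party/` location is an assumption based on the png discussion above; grep your own checkout to find the real spot):

```shell
# From the tensorflow source root: find where the hard-coded NEON flag
# comes from. aarch64 compilers reject -mfpu=neon because it is a 32-bit
# ARM option; removing it for arm64 targets is the gist of the fix.
grep -rn -- "-mfpu=neon" third_party/ 2>/dev/null | head
```

Whatever file turns up is the one the eventual patch needs to touch.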
Today I uploaded a git patch that takes care of the two files that needed to be fixed in order to build successfully on the Jetson. I also posted two new links to wheel binaries so you won't need to build TF 1.8.0 yourself.
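For reference, applying such a patch to a TF checkout generally looks like this (the patch filename below is a placeholder; use the one shipped with the repo):

```shell
# Dry-run the patch first with --check, then apply it for real.
# PATCH is a placeholder filename; the tensorflow/ directory is assumed
# to be a git checkout of the matching TF release.
PATCH=tensorflow-1.8.0-jetson.patch
if git -C tensorflow apply --check "$PATCH" 2>/dev/null; then
    git -C tensorflow apply "$PATCH"
else
    echo "Patch does not apply cleanly; check that the branch/tag matches."
fi
```

The `--check` dry run matters here because the patch is version-specific and will not apply to other branches.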
ERROR: Skipping '//tensorflow/tools/pip_package:build_pip_package': error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'build_defs.bzl': no such package '@local_config_tensorrt//': Traceback (most recent call last):
File "/home/kecai/git/ml/DL/TF/tensorflow/third_party/tensorrt/tensorrt_configure.bzl", line 164
auto_configure_fail("TensorRT library (libnvinfer) v...")
File "/home/kecai/git/ml/DL/TF/tensorflow/third_party/gpus/cuda_configure.bzl", line 342, in auto_configure_fail
fail(("\n%sCuda Configuration Error:%...)))
Cuda Configuration Error: TensorRT library (libnvinfer) version is not set.
WARNING: Target pattern parsing failed.
ERROR: error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'build_defs.bzl': no such package '@local_config_tensorrt//': Traceback (most recent call last):
File "/home/kecai/git/ml/DL/TF/tensorflow/third_party/tensorrt/tensorrt_configure.bzl", line 164
auto_configure_fail("TensorRT library (libnvinfer) v...")
File "/home/kecai/git/ml/DL/TF/tensorflow/third_party/gpus/cuda_configure.bzl", line 342, in auto_configure_fail
fail(("\n%sCuda Configuration Error:%...)))
Cuda Configuration Error: TensorRT library (libnvinfer) version is not set.
INFO: Elapsed time: 0.262s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
currently loading: tensorflow/tools/pip_package
This issue still seems to exist. I'm building TF from the master branch, without using the GPU. Any ideas?
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
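The '@local_config_tensorrt' failure usually means the configure step never answered the TensorRT questions. A minimal sketch of the usual workaround, using TF's standard configure environment variables (the TensorRT path shown is typical for JetPack 3.2 and should be verified on your own system):

```shell
# Disable TensorRT so configure never probes for libnvinfer:
export TF_NEED_TENSORRT=0
# Or, to keep TensorRT, tell configure where JetPack installed it
# (verify the path with: ldconfig -p | grep nvinfer):
#   export TF_NEED_TENSORRT=1
#   export TENSORRT_INSTALL_PATH=/usr/lib/aarch64-linux-gnu
# Then re-run configure from the tensorflow source root before bazel build:
./configure 2>/dev/null || echo "run this from the tensorflow source root"
```

After configure completes cleanly, the same `bazel build` command should get past the package-loading stage.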
Hi, I am new to the Jetson TX2. Today I wanted to install TensorFlow on my Jetson TX2, and I have already changed "PYTHON_BIN_PATH=$(which python)" to "PYTHON_BIN_PATH=$(which python3)". However, when I run ./BuildTensorFlow.sh, the errors below appear:
TF_CUDA_COMPUTE_CAPABILITIES=5.3,6.2
CUDNN_INSTALL_PATH=/usr/lib/aarch64-linux-gnu
TF_CUDNN_VERSION=7.0.5
You have bazel 0.11.1- (@non-git) installed.
Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]:
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl         # Build with MKL support.
    --config=monolithic  # Config for mostly static monolithic build.
Configuration finished
.................................
ERROR: Skipping '//tensorflow/tools/pip_package:build_pip_package': error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'build_defs.bzl': no such package '@local_config_tensorrt//': Traceback (most recent call last):
  File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/tensorrt/tensorrt_configure.bzl", line 164
    auto_configure_fail("TensorRT library (libnvinfer) v...")
  File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/gpus/cuda_configure.bzl", line 210, in auto_configure_fail
    fail(("\n%sCuda Configuration Error:%...)))
Cuda Configuration Error: TensorRT library (libnvinfer) version is not set.
WARNING: Target pattern parsing failed.
ERROR: error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'build_defs.bzl': no such package '@local_config_tensorrt//': Traceback (most recent call last):
  File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/tensorrt/tensorrt_configure.bzl", line 164
    auto_configure_fail("TensorRT library (libnvinfer) v...")
  File "/home/nvidia/JetsonTFBuild/TensorFlow_Install/tensorflow/third_party/gpus/cuda_configure.bzl", line 210, in auto_configure_fail
    fail(("\n%sCuda Configuration Error:%...)))
Cuda Configuration Error: TensorRT library (libnvinfer) version is not set.
INFO: Elapsed time: 1.768s
FAILED: Build did NOT complete successfully (0 packages loaded)
currently loading: tensorflow/tools/pip_package
Does anybody have the same errors? I sincerely hope somebody can help me. Thanks!