Closed: hongyi-zhao closed this issue 3 years ago
Hi,
This is Ubuntu 20.04 with apollo git master branch. I tried with the following steps:
$ bash docker/scripts/dev_start.sh
$ bash docker/scripts/dev_into.sh
$ bash scripts/bootstrap.sh
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
nohup: appending output to 'nohup.out'
nohup: failed to run command 'cyber_launch': No such file or directory
Could not launch module monitor. Is it already built?
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
nohup: appending output to 'nohup.out'
nohup: failed to run command 'cyber_launch': No such file or directory
Could not launch module dreamview. Is it already built?
Any hints for this problem?
Regards, HY
Try building everything before running scripts/bootstrap.sh
by running
./apollo.sh build # debug build
Or
./apollo.sh build_opt_gpu # optimized gpu build
inside docker.
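For reference, a minimal sketch of the end-to-end sequence this advice implies (all commands are taken from the steps above; the build flavor is interchangeable):

$ bash docker/scripts/dev_start.sh   # on the host: start the dev container
$ bash docker/scripts/dev_into.sh    # on the host: enter the container
$ ./apollo.sh build                  # inside the container: build first
$ bash scripts/bootstrap.sh          # only after the build succeeds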
Still failed. See the following for more info:
$ ./apollo.sh build
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Apollo Environment Settings:
[INFO] APOLLO_ROOT_DIR: /apollo
[INFO] APOLLO_CACHE_DIR: /apollo/.cache
[INFO] APOLLO_IN_DOCKER: true
[INFO] APOLLO_VERSION: master-2020-08-22-9ac6fb67f6
[INFO] DOCKER_IMG: dev-x86_64-18.04-20200821_1308
[INFO] APOLLO_ENV: STAGE=dev USE_ESD_CAN=false USE_GPU=0
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Running build under CPU mode on x86_64 platform.
[WARNING] ESD CAN library supplied by ESD Electronics doesn't exist.
[WARNING] If you need ESD CAN, please refer to:
[WARNING] third_party/can_card_library/esd_can/README.md
[INFO] Build Overview:
[INFO] Bazel Options: --define USE_ESD_CAN=false
[INFO] Build Targets: //modules/... union //cyber/... except //modules/drivers/canbus/can_client/esd/...
Loading: 0 packages loaded
Loading: 485 packages loaded
Loading: 485 packages loaded
(23:21:07) INFO: Current date is 2020-08-23
(23:21:08) INFO: Analyzed 4686 targets (108 packages loaded, 17905 targets configured).
(23:21:08) INFO: Found 4686 targets...
(23:21:26) ERROR: /apollo/modules/perception/lidar/lib/detection/lidar_point_pillars/BUILD:74:13: error while parsing .d file: /apollo/.cache/bazel/540135163923dd7d5820f3ee4b306b32/execroot/apollo/bazel-out/k8-fastbuild/bin/modules/perception/lidar/lib/detection/lidar_point_pillars/_objs/anchor_mask_cuda/anchor_mask_cuda.pic.d (No such file or directory)
gcc: warning: modules/perception/lidar/lib/detection/lidar_point_pillars/anchor_mask_cuda.cu: linker input file unused because linking not done
(23:21:26) INFO: Elapsed time: 18.840s, Critical Path: 14.58s
(23:21:26) INFO: 1448 processes: 1448 local.
(23:21:26) FAILED: Build did NOT complete successfully
Then I tried the following build options and all of them failed:
$ ./apollo.sh clean
$ ./apollo.sh build_opt
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Apollo Environment Settings:
[INFO] APOLLO_ROOT_DIR: /apollo
[INFO] APOLLO_CACHE_DIR: /apollo/.cache
[INFO] APOLLO_IN_DOCKER: true
[INFO] APOLLO_VERSION: master-2020-08-22-9ac6fb67f6
[INFO] DOCKER_IMG: dev-x86_64-18.04-20200821_1308
[INFO] APOLLO_ENV: STAGE=dev USE_ESD_CAN=false USE_GPU=0
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Running build under CPU mode on x86_64 platform.
[WARNING] ESD CAN library supplied by ESD Electronics doesn't exist.
[WARNING] If you need ESD CAN, please refer to:
[WARNING] third_party/can_card_library/esd_can/README.md
[INFO] Build Overview:
[INFO] Bazel Options: --config=opt --define USE_ESD_CAN=false
[INFO] Build Targets: //modules/... union //cyber/... except //modules/drivers/canbus/can_client/esd/...
Loading: 0 packages loaded
Loading: 485 packages loaded
Loading: 485 packages loaded
(23:24:07) INFO: Current date is 2020-08-23
(23:24:08) INFO: Analyzed 4686 targets (108 packages loaded, 17905 targets configured).
(23:24:08) INFO: Found 4686 targets...
(23:24:21) INFO: From Compiling external/com_github_grpc_grpc/src/core/ext/filters/client_channel/lb_policy/weighted_target/weighted_target.cc:
external/com_github_grpc_grpc/src/core/ext/filters/client_channel/lb_policy/weighted_target/weighted_target.cc: In member function 'virtual grpc_core::RefCountedPtr<grpc_core::LoadBalancingPolicy::Config> grpc_core::{anonymous}::WeightedTargetLbFactory::ParseLoadBalancingConfig(const grpc_core::Json&, grpc_error**) const':
external/com_github_grpc_grpc/src/core/ext/filters/client_channel/lb_policy/weighted_target/weighted_target.cc:55:10: warning: 'child_config.grpc_core::{anonymous}::WeightedTargetLbConfig::ChildConfig::weight' may be used uninitialized in this function [-Wmaybe-uninitialized]
struct ChildConfig {
^~~~~~~~~~~
external/com_github_grpc_grpc/src/core/ext/filters/client_channel/lb_policy/weighted_target/weighted_target.cc:647:45: note: 'child_config.grpc_core::{anonymous}::WeightedTargetLbConfig::ChildConfig::weight' was declared here
WeightedTargetLbConfig::ChildConfig child_config;
^~~~~~~~~~~~
(23:24:26) INFO: From Compiling external/com_github_grpc_grpc/src/core/lib/surface/server.cc:
external/com_github_grpc_grpc/src/core/lib/surface/server.cc: In function 'grpc_call_error {anonymous}::queue_call_request(grpc_server*, size_t, {anonymous}::requested_call*)':
external/com_github_grpc_grpc/src/core/lib/surface/server.cc:1242:37: warning: 'rm' may be used uninitialized in this function [-Wmaybe-uninitialized]
rm->RequestCallWithPossiblePublish(cq_idx, rc);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
(23:24:45) ERROR: /apollo/modules/perception/base/BUILD:358:11: C++ compilation of rule '//modules/perception/base:syncedmem' failed (Exit 1)
In file included from external/uuid/cublas_v2.h:65:0,
from ./modules/perception/base/common.h:21,
from ./modules/perception/base/syncedmem.h:66,
from modules/perception/base/syncedmem.cc:63:
external/uuid/cublas_api.h:72:10: fatal error: driver_types.h: No such file or directory
#include "driver_types.h"
^~~~~~~~~~~~~~~~
compilation terminated.
(23:24:45) INFO: Elapsed time: 38.603s, Critical Path: 37.03s
(23:24:45) INFO: 2262 processes: 2262 local.
(23:24:45) FAILED: Build did NOT complete successfully
$ ./apollo.sh clean
$ ./apollo.sh build_cpu
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Apollo Environment Settings:
[INFO] APOLLO_ROOT_DIR: /apollo
[INFO] APOLLO_CACHE_DIR: /apollo/.cache
[INFO] APOLLO_IN_DOCKER: true
[INFO] APOLLO_VERSION: master-2020-08-22-9ac6fb67f6
[INFO] DOCKER_IMG: dev-x86_64-18.04-20200821_1308
[INFO] APOLLO_ENV: STAGE=dev USE_ESD_CAN=false USE_GPU=0
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Running build under CPU mode on x86_64 platform.
[WARNING] ESD CAN library supplied by ESD Electronics doesn't exist.
[WARNING] If you need ESD CAN, please refer to:
[WARNING] third_party/can_card_library/esd_can/README.md
[INFO] Build Overview:
[INFO] Bazel Options: --config=cpu --define USE_ESD_CAN=false
[INFO] Build Targets: //modules/... union //cyber/... except //modules/drivers/canbus/can_client/esd/...
Loading: 0 packages loaded
Loading: 485 packages loaded
Loading: 485 packages loaded
(23:27:23) WARNING: The following configs were expanded more than once: [cpu]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
(23:27:23) INFO: Current date is 2020-08-23
(23:27:24) INFO: Analyzed 4686 targets (108 packages loaded, 17905 targets configured).
(23:27:24) INFO: Found 4686 targets...
(23:27:43) ERROR: /apollo/modules/perception/lidar/lib/detection/lidar_point_pillars/BUILD:105:13: error while parsing .d file: /apollo/.cache/bazel/540135163923dd7d5820f3ee4b306b32/execroot/apollo/bazel-out/k8-fastbuild/bin/modules/perception/lidar/lib/detection/lidar_point_pillars/_objs/preprocess_points_cuda/preprocess_points_cuda.pic.d (No such file or directory)
gcc: warning: modules/perception/lidar/lib/detection/lidar_point_pillars/preprocess_points_cuda.cu: linker input file unused because linking not done
(23:27:43) INFO: Elapsed time: 20.055s, Critical Path: 18.22s
(23:27:43) INFO: 1504 processes: 1504 local.
(23:27:43) FAILED: Build did NOT complete successfully
Then I tried the same build option, build_cpu, a second time, but observed a different error report. See the following for details:
$ ./apollo.sh
$ ./apollo.sh build_cpu
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Apollo Environment Settings:
[INFO] APOLLO_ROOT_DIR: /apollo
[INFO] APOLLO_CACHE_DIR: /apollo/.cache
[INFO] APOLLO_IN_DOCKER: true
[INFO] APOLLO_VERSION: master-2020-08-22-9ac6fb67f6
[INFO] DOCKER_IMG: dev-x86_64-18.04-20200821_1308
[INFO] APOLLO_ENV: STAGE=dev USE_ESD_CAN=false USE_GPU=0
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Running build under CPU mode on x86_64 platform.
[WARNING] ESD CAN library supplied by ESD Electronics doesn't exist.
[WARNING] If you need ESD CAN, please refer to:
[WARNING] third_party/can_card_library/esd_can/README.md
[INFO] Build Overview:
[INFO] Bazel Options: --config=cpu --define USE_ESD_CAN=false
[INFO] Build Targets: //modules/... union //cyber/... except //modules/drivers/canbus/can_client/esd/...
Loading: 0 packages loaded
Loading: 485 packages loaded
Loading: 485 packages loaded
(23:33:49) WARNING: The following configs were expanded more than once: [cpu]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
(23:33:49) INFO: Current date is 2020-08-23
(23:33:51) INFO: Analyzed 4686 targets (108 packages loaded, 17905 targets configured).
(23:33:51) INFO: Found 4686 targets...
(23:34:11) ERROR: /apollo/modules/perception/lidar/lib/detection/lidar_point_pillars/BUILD:74:13: error while parsing .d file: /apollo/.cache/bazel/540135163923dd7d5820f3ee4b306b32/execroot/apollo/bazel-out/k8-fastbuild/bin/modules/perception/lidar/lib/detection/lidar_point_pillars/_objs/anchor_mask_cuda/anchor_mask_cuda.pic.d (No such file or directory)
gcc: warning: modules/perception/lidar/lib/detection/lidar_point_pillars/anchor_mask_cuda.cu: linker input file unused because linking not done
(23:34:12) INFO: Elapsed time: 22.791s, Critical Path: 19.93s
(23:34:12) INFO: 1493 processes: 1493 local.
(23:34:12) FAILED: Build did NOT complete successfully
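As a diagnostic sketch: one way to narrow this down is to rebuild only the failing target with Bazel inside the container. The target label below is an assumption inferred from the _objs/anchor_mask_cuda path in the error message:

$ bazel build --config=cpu \
    //modules/perception/lidar/lib/detection/lidar_point_pillars:anchor_mask_cuda
# The gcc "linker input file unused" warning above suggests the .cu file was
# handed to the host gcc instead of nvcc, which is what happens when the
# build runs in CPU mode but CUDA-only targets are still in the target set.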
It seems that you haven't installed the nvidia-driver. Do you have a GPU on your machine?
IMO, by "GPU" we usually mean an advanced/expensive graphics card, but in my case the graphics card is rather old and low-end, as follows:
$ lspci | grep -i nvidia
81:00.0 VGA compatible controller: NVIDIA Corporation G96C [GeForce 9500 GT] (rev a1)
So I really don't know whether it can be called a GPU, or whether I should install the nvidia-driver.
OTOH, if I do want to install the nvidia-driver, should I follow the instructions here?
I also noticed that those instructions are for Ubuntu 18.04, while I'm running Ubuntu 20.04, so I'm still not sure whether the method described there suits my situation.
Best regards, HY
The instructions you mentioned in the apollo-kernel repo are outdated. Just install CUDA 10.2 or 11.0 on your host from the official https://developer.nvidia.com/cuda-downloads?target_os=Linux if you don't need the Apollo-RT kernel (which is the case for most users).
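A hedged sketch of the distribution-native alternative on Ubuntu, assuming the standard ubuntu-drivers tooling is installed (the CUDA runfile from the link above also offers to install a driver):

$ ubuntu-drivers devices          # list detected hardware and recommended drivers
$ sudo ubuntu-drivers autoinstall # install the recommended driver
$ sudo reboot                     # load the new kernel module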
The instructions you mentioned in the apollo-kernel repo are outdated. Just install CUDA 10.2 or 11.0 on your host from the official https://developer.nvidia.com/cuda-downloads?target_os=Linux
I'm still confused by your notes above. To be more specific: IMO, CUDA is a development toolkit for NVIDIA GPUs, while the instructions here are for installing the NVIDIA driver.
Do you mean that installing the CUDA toolkit will also install the NVIDIA driver? Furthermore, as I've said, my machine only has a very archaic graphics card -- G96C [GeForce 9500 GT] (rev a1) -- and I can't find any appropriate driver for it on the NVIDIA drivers website. Is this really a GPU?
if you don't need the Apollo-RT kernel (which is the case for most users).
Again, if I don't run the Apollo-RT kernel, do you mean I won't get the benefits the Apollo kernel provides, i.e., better support for Apollo sensor units driving cameras and CAN cards, as described here?
Best regards, HY
1) IIRC, the CUDA toolkit comes bundled with the NVIDIA driver installer. And Apollo master needs an nvidia driver version > 440.33.
2) You'd better check for yourself whether your graphics card is GPU-enabled.
3) As for the Apollo-RT kernel, it is rather old and lacks maintenance. I personally can't tell the benefit of choosing it.
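A quick sketch for checking the installed driver against the > 440.33 requirement, using standard nvidia-smi options on the host:

$ nvidia-smi --query-gpu=driver_version --format=csv,noheader
$ cat /proc/driver/nvidia/version   # alternative source of the same info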
- You'd better check for yourself whether your graphics card is GPU-enabled.
What's the command/tool/utility for this purpose?
- As for the Apollo-RT kernel, it is rather old and lacks maintenance. I personally can't tell the benefit of choosing it.
If so, why are there still so many tutorials/instructions in the Apollo docs referring to the installation of the apollo-kernel? See the following grep results from the top-level directory of my local apollo git repo:
$ find . -type f | grep -v '^\./\.git' | xargs -P0 grep -ilR 'apollo-kernel' 2>/dev/null | sort -u
./docs/FAQs/General_FAQs.md
./docs/howto/how_to_build_your_own_kernel.md
./docs/howto/how_to_install_apollo_kernel_cn.md
./docs/howto/how_to_install_apollo_kernel.md
./docs/quickstart/apollo_1_0_hardware_system_installation_guide_cn.md
./docs/quickstart/apollo_1_0_hardware_system_installation_guide.md
./docs/quickstart/apollo_1_0_quick_start_developer.md
./docs/quickstart/apollo_1_5_hardware_system_installation_guide_cn.md
./docs/quickstart/apollo_1_5_hardware_system_installation_guide.md
./docs/quickstart/apollo_2_0_hardware_system_installation_guide_v1_cn.md
./docs/quickstart/apollo_2_0_hardware_system_installation_guide_v1.md
./docs/quickstart/apollo_2_5_hardware_system_installation_guide_v1_cn.md
./docs/quickstart/apollo_2_5_hardware_system_installation_guide_v1.md
./docs/quickstart/apollo_software_installation_guide.md
./docs/specs/D-kit/Waypoint_Following/Apollo_Installation_cn.md
./docs/specs/Software_and_Kernel_Installation_guide_cn.md
./docs/specs/Software_and_Kernel_Installation_guide.md
./README.md
In particular, I want to know whether I can obtain all the Apollo D-Kit functionality without using the apollo-kernel. As you can see, the D-Kit document still includes instructions for installing the apollo-kernel.
Best regards, HY
2) nvidia-smi
3) All the documentation updates will be done in the next few days, before the Apollo 6.0 release.
Regards,
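A minimal sketch of the check in point 2, run on the host; if nvidia-smi is missing or errors out, no usable driver is loaded:

$ nvidia-smi -L             # list the GPUs the driver can see
$ lspci | grep -i nvidia    # cross-check against what the PCI bus reports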
1. IIRC, the CUDA toolkit comes bundled with the NVIDIA driver installer. And Apollo master needs an nvidia driver version > 440.33.
Another relevant issue: is Apollo master also dependent on cuDNN or not?
NN or not?
Docker images in the master branch come bundled with cuDNN. Although cuDNN is not directly used by Apollo modules, at least TensorRT has a dependency on cuDNN, and libraries like libtorch may also depend on cuDNN to compile/work correctly.
$ find modules/ -name BUILD -exec egrep "cudnn|tensorrt" -i {} \;
name = "trajectory_imitation_tensorrt_inference",
srcs = ["trajectory_imitation_tensorrt_inference.cc"],
hdrs = ["trajectory_imitation_tensorrt_inference.h"],
"@local_config_tensorrt//:tensorrt",
":trajectory_imitation_tensorrt_inference",
"@local_config_tensorrt//:tensorrt",
NN or not?
Why did you quote only such a short part of my original post?
In your result above, only the tensorrt keyword was matched. Since I don't have precise knowledge of Apollo's source files, I ran a grep command like the following to confirm the cuDNN dependency:
$ egrep -ilR 'cudnn' . 2>/dev/null | sort -u
./.cache/bazel/540135163923dd7d5820f3ee4b306b32/external/local_config_cuda/cuda/BUILD
./.cache/bazel/540135163923dd7d5820f3ee4b306b32/external/local_config_cuda/cuda/cuda/cuda_config.h
./.cache/bazel/540135163923dd7d5820f3ee4b306b32/external/@local_config_cuda.marker
./.cache/bazel/540135163923dd7d5820f3ee4b306b32/java.log
./.cache/bazel/540135163923dd7d5820f3ee4b306b32/java.log.in-dev-docker.werner.log.java.20200824-050456.94023
./docker/build/build_cyber.sh
./docker/build/cyber.aarch64.dockerfile
./docker/build/cyber.x86_64.dockerfile
./docker/build/installers/install_nvidia_ml_for_jetson.sh
./docker/build/installers/install_opencv.sh
./docker/build/installers/install_tensorrt.sh
./modules/perception/camera/tools/obstacle_detection/data/yolo/3d-r4-half/model
./modules/perception/camera/tools/offline/data/perception/camera/models/lane_detector/darkSCNN/model
./modules/perception/camera/tools/offline/data/perception/camera/models/yolo_obstacle_detector/3d-r4-half/model
./modules/perception/camera/tools/offline/data/perception/camera/models/yolo_obstacle_detector/3d-yolo/model
./modules/perception/inference/tensorrt/BUILD_
./modules/perception/inference/tensorrt/plugins/BUILD_
./modules/perception/inference/tensorrt/plugins/softmax_plugin.cu
./modules/perception/inference/tensorrt/plugins/softmax_plugin.h
./modules/perception/inference/tensorrt/rt_common.h
./modules/perception/production/data/perception/camera/models/lane_detector/darkSCNN/model
./modules/perception/production/data/perception/camera/models/yolo_obstacle_detector/3d-r4-half/model
./modules/perception/production/data/perception/camera/models/yolo_obstacle_detector/3d-yolo/model
./modules/perception/proto/rt.proto
./modules/perception/testdata/camera/app/data/perception/camera/models/lane_detector/darkSCNN/model
./modules/perception/testdata/camera/app/data/perception/camera/models/yolo_obstacle_detector/3d-r4-half/model
./modules/perception/testdata/camera/app/data/perception/camera/models/yolo_obstacle_detector/3d-yolo/model
./modules/perception/testdata/lidar/lib/segmentation/cnnseg/data/perception/camera/models/lane_detector/darkSCNN/model
./modules/perception/testdata/lidar/lib/segmentation/cnnseg/data/perception/camera/models/yolo_obstacle_detector/3d-r4-half/model
./modules/perception/testdata/lidar/lib/segmentation/cnnseg/data/perception/camera/models/yolo_obstacle_detector/3d-yolo/model
./modules/perception/testdata/lidar/lib/segmentation/mst/data/perception/camera/models/lane_detector/darkSCNN/model
./modules/perception/testdata/lidar/lib/segmentation/mst/data/perception/camera/models/yolo_obstacle_detector/3d-r4-half/model
./modules/perception/testdata/lidar/lib/segmentation/mst/data/perception/camera/models/yolo_obstacle_detector/3d-yolo/model
./third_party/gpus/cuda/BUILD.tpl
./third_party/gpus/cuda_configure.bzl
./third_party/gpus/cuda/cuda_config.h.tpl
./third_party/gpus/find_cuda_config.py
./tools/bootstrap.py
Why did you quote only such a short part of my original post?
Didn't notice that.
Emm, I wrote (or adapted from other sources) many of the files in your grep result.
When we talk about a "dependency", we generally mean either a "build-time dependency" (build-deps) or a "run-time dependency" (deps).
Besides, I don't think exposing too many internals to general users is a good idea, as in the case of all those files under the third_party/gpus directory. (They are there to provide CUDA support for Bazel, to once again satisfy your curiosity.)
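For completeness, a sketch for confirming what the container actually ships; the paths assume the stock dev image, which bundles cuDNN 7 under /usr/lib/x86_64-linux-gnu and /usr/include:

$ ls /usr/lib/x86_64-linux-gnu/libcudnn.so*
$ grep -m1 '#define CUDNN_MAJOR' /usr/include/cudnn.h  # cuDNN 7 keeps the version macro in cudnn.h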
The CPU|GPU build was fixed in PR #12282, and this issue is going to be closed.
Please create another issue if you have other questions, or reopen this one if the error is still there.
Thanks for using Apollo!
I have the same problem as the title describes, and when I run ./apollo.sh build_opt_gpu I get:
[WARNING] No nvidia-driver found. CPU will be used.
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Apollo Environment Settings:
[INFO] APOLLO_ROOT_DIR: /apollo
[INFO] APOLLO_CACHE_DIR: /apollo/.cache
[INFO] APOLLO_IN_DOCKER: true
[INFO] APOLLO_VERSION: master-2020-09-16-e6e51d58a6
[INFO] DOCKER_IMG: dev-x86_64-18.04-20200914_0742
[INFO] APOLLO_ENV: STAGE=dev USE_ESD_CAN=false
[INFO] USE_GPU: USE_GPU_HOST=0 USE_GPU_TARGET=0
[WARNING] No nvidia-driver found. CPU will be used.
[INFO] Configure .apollo.bazelrc in non-interactive mode
You have bazel 3.4.1 installed.
Found possible Python library paths:
/usr/lib/python3/dist-packages
/usr/local/lib/python3.6/dist-packages
Found CUDA 10.2 in:
/usr/local/cuda-10.2/targets/x86_64-linux/lib
/usr/local/cuda-10.2/targets/x86_64-linux/include
Found cuDNN 7 in:
/usr/lib/x86_64-linux-gnu
/usr/include
Found TensorRT 7 in:
/usr/lib/x86_64-linux-gnu
/usr/include/x86_64-linux-gnu
I think the problem is that the nvidia-driver is not installed, but I have installed the nvidia driver on my host machine. When I run nvidia-smi I get the info below:
This means that my driver is fine. Can you tell me how to solve this problem?
Please be sure the nvidia-smi command is issued from within docker. To run Apollo in Docker containers leveraging the host's NVIDIA GPU, nvidia-docker should be installed on the host first. See here for discussions that may be relevant to your question.
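A standard smoke test for the nvidia-docker setup on the host; the CUDA image tag is an assumption and may need to match what is available locally:

$ docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi
# If this prints the usual GPU table, containers can reach the host GPU.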
I think you forgot to build the Apollo project with ./apollo.sh build after you get into the container. (Reply referenced from the Apollo developer community issue-master contest: https://apollo.auto/developer/questiondata_cn.html?target=206)