bark-simulator / bark

Open-Source Framework for Development, Simulation and Benchmarking of Behavior Planning Algorithms for Autonomous Driving
https://bark-simulator.github.io/
MIT License

Installation failures. #315

Closed ianamason closed 4 years ago

ianamason commented 4 years ago

I am trying to get bark up and running inside a fresh Ubuntu 18.04 vagrant box. I think you should mention the

pip install virtualenv==16.0.0 

in the FAQ entry about Python.h not being found. Searching closed issues shouldn't be necessary for a vanilla build.
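For anyone else starting from a clean box, a minimal sketch of that workaround (the environment path ./bark_env is only illustrative, not from the bark docs):

# Pin virtualenv before creating the environment, otherwise Python.h is not found during the build.
pip install virtualenv==16.0.0

# Create and activate a Python 3 environment, then run the tests from the repository root.
virtualenv -p python3 ./bark_env
source ./bark_env/bin/activate
bazel test //...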

Anyhow, the build (bazel test //...) now fails here:

ERROR: /home/vagrant/bark/modules/models/behavior/motion_primitives/BUILD:1:1: C++ compilation of rule '//modules/models/behavior/motion_primitives:motion_primitives' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer '-std=c++0x' -MD -MF ... (remaining 412 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
In file included from ./modules/world/objects/agent.hpp:14:0,
                 from ./modules/world/world.hpp:19,
                 from ./modules/models/behavior/longitudinal_acceleration/longitudinal_acceleration.hpp:11,
                 from ./modules/models/behavior/idm/idm_classic.hpp:13,
                 from ./modules/models/behavior/idm/idm_lane_tracking.hpp:12,
                 from ./modules/models/behavior/motion_primitives/primitives.hpp:12,
                 from ./modules/models/behavior/motion_primitives/macro_actions.hpp:11,
                 from modules/models/behavior/motion_primitives/macro_actions.cpp:7:
./modules/world/goal_definition/goal_definition.hpp: In member function 'virtual const Polygon& modules::world::goal_definition::GoalDefinition::GetShape() const':
./modules/world/goal_definition/goal_definition.hpp:29:65: warning: no return statement in function returning non-void [-Wreturn-type]
     virtual const modules::geometry::Polygon& GetShape() const {}
                                                                 ^
In file included from ./modules/models/behavior/motion_primitives/macro_actions.hpp:11:0,
                 from modules/models/behavior/motion_primitives/macro_actions.cpp:7:
./modules/models/behavior/motion_primitives/primitives.hpp: In constructor 'modules::models::behavior::primitives::Primitive::Primitive(const ParamsPtr&, const DynamicModelPtr&)':
./modules/models/behavior/motion_primitives/primitives.hpp:51:19: warning: 'modules::models::behavior::primitives::Primitive::dynamic_model_' will be initialized after [-Wreorder]
   DynamicModelPtr dynamic_model_;
                   ^~~~~~~~~~~~~~
./modules/models/behavior/motion_primitives/primitives.hpp:50:9: warning:   'float modules::models::behavior::primitives::Primitive::integration_time_delta_' [-Wreorder]
   float integration_time_delta_;
         ^~~~~~~~~~~~~~~~~~~~~~~
./modules/models/behavior/motion_primitives/primitives.hpp:33:12: warning:   when initialized here [-Wreorder]
   explicit Primitive(const commons::ParamsPtr& params,
            ^~~~~~~~~
virtual memory exhausted: Cannot allocate memory
INFO: Elapsed time: 118.796s, Critical Path: 21.23s
INFO: 81 processes: 81 linux-sandbox.
FAILED: Build did NOT complete successfully
//examples:od8_const_vel_one_agent                                    NO STATUS
//examples:od8_const_vel_two_agent                                    NO STATUS
//examples:planner_uct                                                NO STATUS
//examples:planner_uct_benchmark                                      NO STATUS
//examples:scenario_dump_load                                         NO STATUS
//examples:scenario_video_rendering                                   NO STATUS
//examples:scenarios_from_database                                    NO STATUS
//modules/benchmark/tests:py_benchmark_analyzer_tests                 NO STATUS
//modules/benchmark/tests:py_benchmark_process_tests                  NO STATUS
//modules/benchmark/tests:py_benchmark_runner_tests                   NO STATUS
//modules/commons/tests:params_tests                                  NO STATUS
//modules/commons/tests:py_commons_tests                              NO STATUS
//modules/commons/tests:util_tests                                    NO STATUS
//modules/geometry/tests:geometry_test                                NO STATUS
//modules/geometry/tests:py_geometry_tests                            NO STATUS
//modules/models/tests:behavior_idm_classic_test                      NO STATUS
//modules/models/tests:behavior_longitudinal_acceleration_test        NO STATUS
//modules/models/tests:behavior_mobil_test                            NO STATUS
//modules/models/tests:behavior_motion_primitive_test                 NO STATUS
//modules/models/tests:behavior_static_trajectory_test                NO STATUS
//modules/models/tests:dynamic_tests                                  NO STATUS
//modules/models/tests:execution_test                                 NO STATUS
//modules/models/tests:py_behavior_model_test                         NO STATUS
//modules/runtime/tests:py_evaluation_tests                           NO STATUS
//modules/runtime/tests:py_importer_tests                             NO STATUS
//modules/runtime/tests:py_interaction_dataset_reader_test            NO STATUS
//modules/runtime/tests:py_param_server_tests                         NO STATUS
//modules/runtime/tests:py_runtime_tests                              NO STATUS
//modules/runtime/tests:py_scenario_generation_tests                  NO STATUS
//modules/world/tests:agent_test                                      NO STATUS
//modules/world/tests:map_interface_test                              NO STATUS
//modules/world/tests:observed_world_test                             NO STATUS
//modules/world/tests:opendrive_tests                                 NO STATUS
//modules/world/tests:py_agent_tests                                  NO STATUS
//modules/world/tests:py_map_interface_tests                          NO STATUS
//modules/world/tests:py_opendrive_tests                              NO STATUS
//modules/world/tests:py_road_corridor_tests                          NO STATUS
//modules/world/tests:py_roadgraph_test                               NO STATUS
//modules/world/tests:py_system_tests                                 NO STATUS
//modules/world/tests:py_world_tests                                  NO STATUS
//modules/world/tests:road_corridor_tests                             NO STATUS
//modules/world/tests:roadgraph_test                                  NO STATUS
//modules/world/tests:world_test                                      NO STATUS
//python/tests:py_pickle_tests                                  FAILED TO BUILD

FAILED: Build did NOT complete successfully

I am using

gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0

Any ideas?

juloberno commented 4 years ago

Hi Ian,

We added a remark on the virtualenv version to the installation instructions. Thanks for the notice.

Your failure "virtual memory exhausted: Cannot allocate memory" has not come up during our builds yet. Most likely it has something to do with the vagrant box setup.

1st option: Reduce the number of parallel bazel build jobs with "bazel test //... --jobs=1". In the long run, however, the long build times will be annoying.
2nd option: How is the swap space in the vagrant box configured? Try setting it to a reasonable amount.
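For example (the 4G swapfile size is only an example, pick whatever fits the box):

# Option 1: serialize compilation so only one gcc process runs at a time
bazel test //... --jobs=1

# Option 2: add swap space inside the Ubuntu guest
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile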

Best Julian

patrickhart commented 4 years ago

#317 Updated install.md

ianamason commented 4 years ago

Gosh @patrickhart, I didn't even see the memory exhaustion amongst all the other chaff. I was using 4GB of RAM, but after limiting the jobs to 1 it went through fine. I will boost the RAM to 8GB and see if that helps.

Executed 44 out of 44 tests: 44 tests pass.
INFO: Build completed successfully, 430 total actions

Thanks for the help.

ianamason commented 4 years ago

With 8GB all the tests complete, and all but one pass.

//modules/benchmark/tests:py_benchmark_runner_tests                      FAILED in 10.2s
  /home/vagrant/.cache/bazel/_bazel_vagrant/b31675c09bcf797303603580e7679a6f/execroot/bark_project/bazel-out/k8-fastbuild/testlogs/modules/benchmark/tests/py_benchmark_runner_tests/test.log

Executed 44 out of 44 tests: 43 tests pass and 1 fails locally.

The log of the failing test also contains complaints that point to memory issues, so I will try again with 16GB.

ianamason commented 4 years ago

16GB is a charm

Executed 44 out of 44 tests: 44 tests pass.

juloberno commented 4 years ago

Okay, perfect. Then I can close the issue for now?

ianamason commented 4 years ago

Sure, close it. Thanks for your help.
