[x] MARCEL BRUCKNER The two archives pretrained_agents and expert_trajectories: these cannot stay in the repo; such huge files make git incredibly slow. (I do see the point that we need them, no discussion on that.) --> Proposal: Create a new, separate repository in the GAIL-4-BARK GitHub project, store the archives there using git lfs, and define a bazel filegroup to access the files from the main repo, cf. https://git.fortiss.org/autosim/interaction_dataset/-/blob/master/BUILD (better ideas welcome) --> https://github.com/GAIL-4-BARK/large_data_store
[x] MARCEL BRUCKNER Provide a shell script to unzip the files to the correct locations. On my first attempt the example failed to run because the path given in gail_params.json ("expert_path_dir": "data/expert_trajectories/sac_20000_observations") did not match anything in the archive. --> Obsolete with https://github.com/GAIL-4-BARK/large_data_store
[x] MARCEL BRUCKNER Out-of-the-box failing test: //bark_ml/tests/py_library_tf2rl_tests:rendered_tests FAILED in 24.7s <-- Several errors, maybe also related to wrong file locations. --> These were never intended to run as tests, as they render to screen (not possible in bazel test); changed them to a py_binary
[x] MARCEL BRUCKNER Out-of-the-box failing test: //examples:tfa FAILED in 11.1s <-- I'm not sure what result to expect here --> Intended to fail, as it is only runnable in the docker container (Patrick hardcoded some file paths)
[x] FERENC TÖRÖK Clean up the notebook: either fill or delete the empty sections, and make the "project task" section a documentation of what was done, not of what was intended to be done. (Otherwise the notebook and the gail.py example are really nice!)
[x] MARCEL BRUCKNER bark_ml/environments/single_agent_runtime.py --> Why? This change is highly non-generic; revert or improve. --> [Quick fix: replace `[0]*16` with `[0]*observer.observation_space.shape[0]` or similar] Fixes an index error when the ego agent is no longer valid after the world.Step() function is called. Makes the highway and intersection blueprints runnable.
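The quick fix above can be sketched as follows. This is a minimal illustration, not the actual runtime code; the `ego_agent` attribute and the `Observe()` signature are assumptions about the BARK-ML API:

```python
import numpy as np

def observe_or_zeros(observer, observed_world):
    """Hypothetical sketch of the quick fix: when the ego agent has left
    the world after world.Step(), return a zero observation whose length
    is derived from the observer's space instead of a hard-coded 16."""
    if observed_world.ego_agent is None:
        # No magic number: size the dummy observation from the space.
        return np.zeros(observer.observation_space.shape)
    return observer.Observe(observed_world)
```

This keeps the fallback generic: any observer with a differently shaped observation space gets a correctly sized zero vector.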
[x] FERENC TÖRÖK bark_ml/library_wrappers/lib_tf2rl/tf2rl_wrapper.py --> Why doesn't the tf2rl wrapper derive from single agent runtime? This introduces code duplication and unnecessary boilerplate, e.g. the property defs for _scenario, self.action_space, self.observation_space. Or am I missing something? --> Clarified in mattermost
[x] FERENC TÖRÖK _normalize_observation() in bark_ml/library_wrappers/lib_tf2rl/tf2rl_wrapper.py and normalize() in bark_ml/library_wrappers/lib_tf2rl/load_expert_trajectories.py --> Why the code duplication? --> Moved into normalization utils to encapsulate the function
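The deduplicated helper could live in a small shared normalization-utils module; a minimal sketch, assuming a linear rescale into [-1, 1] (the exact range handling in the project may differ):

```python
import numpy as np

def normalize(values, low, high):
    """Scale values from the range [low, high] into [-1, 1].

    Both the wrapper's _normalize_observation() and
    load_expert_trajectories.normalize() can delegate to this single
    implementation instead of duplicating the arithmetic.
    """
    values = np.asarray(values, dtype=np.float32)
    low = np.asarray(low, dtype=np.float32)
    high = np.asarray(high, dtype=np.float32)
    return 2.0 * (values - low) / (high - low) - 1.0
```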
[x] MARCEL BRUCKNER bark_ml/library_wrappers/lib_tf_agents/runners/tfa_runner.py --> Revert the changes here and create a new class derived from TFARunner with the stuff related to expert trajectory generation. --> Subclass SACRunnerGenerator created
[x] MARCEL BRUCKNER bark_ml/library_wrappers/lib_tf2rl/generate_expert_trajectories.py --> I don't get line 215 ff.: try: observations[agent_id]["merge"] = obs_world.lane_corridor.center_line.bounding_box[0].x() > 900 (reason? effect?) --> Deleted completely as it is not used
[x] MARCEL BRUCKNER General: param_server["Scenario"]["Generation"]["InteractionDatasetScenarioGeneration"][...] can be shortened using local_params = param_server["Scenario"]["Generation"]["InteractionDatasetScenarioGeneration"] plus local_params[...] --> Use a local copy of the param_server dict for shorter notation
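The shortening works because subscripting returns a handle to the nested section, so reads and writes through the local name hit the same data. A plain-dict sketch of the pattern (the key names mirror the comment above; the leaf keys and values are illustrative):

```python
# Hypothetical stand-in for the BARK ParameterServer: a nested dict.
param_server = {
    "Scenario": {
        "Generation": {
            "InteractionDatasetScenarioGeneration": {
                "MapFilename": "example_map.xodr",
                "TrackFilenameList": [],
            }
        }
    }
}

# Long form, repeated at every access:
map_file = param_server["Scenario"]["Generation"][
    "InteractionDatasetScenarioGeneration"]["MapFilename"]

# Short form: bind the nested section once, then index locally.
local_params = param_server["Scenario"]["Generation"][
    "InteractionDatasetScenarioGeneration"]
assert local_params["MapFilename"] == map_file
# Mutations through the alias are visible in param_server as well.
local_params["TrackFilenameList"].append("example_tracks.csv")
```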
LEFT ISSUES
[ ] bark_ml/library_wrappers/lib_tf2rl/generate_expert_trajectories.py --> simulate_scenario(): why not use the bark runtime?
--> See mattermost
--> The world.Evaluate() function gives an empty info dict when replaying the dataset. What could be done is to implement a new evaluator that wraps the measure_world() function and add it as an evaluator to the runtime; then world.Evaluate() would give the desired infos. I think this is unnecessarily complex at this point?!
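The proposed (but skipped) evaluator could look roughly like this. The class name, the `Evaluate()` interface, and the injected `measure_world` callable are assumptions based on the discussion above, not the actual BARK evaluator API:

```python
class MeasureWorldEvaluator:
    """Hypothetical evaluator wrapping measure_world() so that
    world.Evaluate() returns a populated info dict during replay."""

    def __init__(self, measure_world_fn):
        # Inject the existing measurement helper rather than duplicating
        # its metric computation inside the evaluator.
        self._measure_world = measure_world_fn

    def Evaluate(self, observed_world):
        # Delegate and return the metrics as this evaluator's
        # contribution to the runtime's info dict.
        return self._measure_world(observed_world)
```

Registered on the runtime, this would make world.Evaluate() forward the measure_world() metrics; the extra indirection is why it was judged unnecessarily complex for now.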
[x] MARCEL BRUCKNER Save the files once with the pep8 style guide, with two-space indent and an 80-character line length, see https://github.com/bark-simulator/bark/blob/master/.vscode/settings.json --> Done
BRANCHES