huawei-noah / SMARTS

Scalable Multi-Agent RL Training School for Autonomous Driving
MIT License
956 stars · 190 forks

[Help Request] Measure multiagent computational performance #1982

Closed olegsinavski closed 1 year ago

olegsinavski commented 1 year ago

High Level Description

Hello, I'm trying to understand the speed of the SMARTS simulator in a multi-agent environment. I'm trying to compute the FPS on the simplest scenario:

Version

1.0.11

Operating System

No response

Problems

This is the simplest script I came up with:

import time
from pathlib import Path

import gym
import numpy as np

# Import paths below follow SMARTS 1.x; adjust if your layout differs.
from smarts.core.agent import Agent
from smarts.core.agent_interface import AgentInterface, AgentType
from smarts.zoo.agent_spec import AgentSpec


def measure_fps(n_agent):
    scenarios = [
        str(Path(__file__).absolute().parents[1] / "scenarios" / "sumo" / "loop")
    ]

    agent_ids = ["Agent_%i" % i for i in range(n_agent)]

    agent_specs = {
        agent_id: AgentSpec(
            interface=AgentInterface.from_type(
                AgentType.LanerWithSpeed,
                max_episode_steps=None,
            ),
            agent_builder=Agent,
        )
        for agent_id in agent_ids
    }

    env = gym.make(
        "smarts.env:hiway-v0",
        scenarios=scenarios,
        agent_specs=agent_specs,
        headless=True,
        sumo_headless=True,
    )

    observations = env.reset()
    action = (0., 0, 0)  # dummy action, identical for every agent
    timestamps = [time.time()]  # record the start so the first step is timed too
    num_agents = []
    for _ in range(100):
        actions = {agent_id: action for agent_id in observations}
        observations, rewards, dones, infos = env.step(actions)
        num_agents.append(len(observations))
        timestamps.append(time.time())

    dt = np.median(np.diff(timestamps))
    mean_agents_count = np.mean(num_agents)
    print("FPS:", 1. / dt, " Alive agents:", mean_agents_count)

    env.close()

I modified the sumo/loop scenario by commenting out the background traffic:

gen_scenario(
    t.Scenario(
        # traffic={"basic": traffic},
        # social_agent_missions={
        #     "all": ([laner_actor], [t.Mission(route=t.RandomRoute())])
        # },

        bubbles=[
            t.Bubble(
                zone=t.PositionalZone(pos=(50, 0), size=(10, 15)),
                margin=5,
                actor=laner_actor,
                follow_actor_id=t.Bubble.to_actor_id(laner_actor, mission_group="all"),
                follow_offset=(-7, 10),
            ),
        ],
    ),

    output_dir=Path(__file__).parent,
)

When I visualize the environment, it does look like I expect: there are n red agents coming to a stop. Here are my numbers:

- 3 agents: 32.2 FPS
- 5 agents: 17.69 FPS
- 10 agents: 9.9 FPS
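For reference, here is a quick back-of-envelope reading of these figures (a sketch I put together from the runs above, not part of the measurement script): multiplying the env FPS by the agent count gives the total agent-steps per second.

```python
# Back-of-envelope: env FPS * agent count = total agent-steps per second.
# The product stays roughly flat (~90-100 agent-steps/s), suggesting the
# per-step cost grows roughly linearly with the number of agents.
measurements = {3: 32.2, 5: 17.69, 10: 9.9}  # n_agents -> measured env FPS

for n_agents, fps in measurements.items():
    print(f"{n_agents:2d} agents: {fps:5.2f} FPS -> {n_agents * fps:6.1f} agent-steps/s")
```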

Could someone please help me with the following questions:

Adaickalavan commented 1 year ago

Hi @olegsinavski,

Consider taking a look at the inbuilt diagnostic tool.

A minor change was required in the diagnostic report generation, which is addressed by #1983. That pull request also improves the instructions for the diagnostic tool.

olegsinavski commented 1 year ago

Hi @Adaickalavan, thank you, I managed to run the diagnostic scenarios. First, is there documentation on the various benchmarks there? I have trouble understanding the differences between actors/sumo_actors/agents.

Also, looking at the code, I don't think it benchmarks what I'm interested in. It seems to instantiate social agents in scenarios, as opposed to actually controllable agents. Specifically, the _compute function in the benchmarks uses the following code to create env:

    env = gym.make(
        "smarts.env:hiway-v0",
        scenarios=scenario_dir,
        shuffle_scenarios=False,
        sim_name="Diagnostic",
        agent_specs={},
        headless=True,
        sumo_headless=True,
        seed=_SEED,
    )

Since agent_specs={} is empty, it doesn't seem to create controllable agents and hence doesn't exercise multi-agent control (even dummy control like in my script). Instead, it seems to create many "NPC" vehicles.
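For instance, I would imagine the diagnostic env could be made to exercise ego-agent control by passing a non-empty agent_specs, along these lines (a sketch reusing the AgentSpec setup from my script above; scenario_dir, _SEED, and n_agents stand in for the benchmark's own variables):

```python
# Sketch: dummy ego agents for the diagnostic env, so env.step() goes
# through the multi-agent control path. The AgentSpec setup mirrors the
# script earlier in this thread; scenario_dir and _SEED are assumed to be
# the benchmark's own variables.
agent_specs = {
    "Agent_%i" % i: AgentSpec(
        interface=AgentInterface.from_type(
            AgentType.LanerWithSpeed,
            max_episode_steps=None,
        ),
        agent_builder=Agent,
    )
    for i in range(n_agents)
}

env = gym.make(
    "smarts.env:hiway-v0",
    scenarios=scenario_dir,
    shuffle_scenarios=False,
    sim_name="Diagnostic",
    agent_specs=agent_specs,  # non-empty: creates controllable ego agents
    headless=True,
    sumo_headless=True,
    seed=_SEED,
)
```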

What do you think?

Gamenot commented 1 year ago

@qianyi-sun Please provide some input on this when you have the chance.

qianyi-sun commented 1 year ago

> Hi @Adaickalavan, thank you, I managed to run the diagnostic scenarios. First, is there documentation on the various benchmarks there? I have trouble understanding the differences between actors/sumo_actors/agents.
>
> Also, looking at the code, I don't think it benchmarks what I'm interested in. It seems to instantiate social agents in scenarios, as opposed to actually controllable agents. Specifically, the _compute function in the benchmarks uses the following code to create env:
>
>     env = gym.make(
>         "smarts.env:hiway-v0",
>         scenarios=scenario_dir,
>         shuffle_scenarios=False,
>         sim_name="Diagnostic",
>         agent_specs={},
>         headless=True,
>         sumo_headless=True,
>         seed=_SEED,
>     )
>
> Since agent_specs={} is empty, it doesn't seem to create controllable agents and hence doesn't exercise multi-agent control (even dummy control like in my script). Instead, it seems to create many "NPC" vehicles.
>
> What do you think?

Hi @olegsinavski, yes, the diagnostic scenarios are meant to test the general performance of the SMARTS simulation with different types of vehicles, mostly "NPC" vehicles. The difference between those actors is the following:

For your use case, since you want to test performance with multiple ego agents only, these scenarios might not be suitable. Your script above, with the traffic commented out, seems to be the right direction. You could also comment out the bubbles: a bubble is mainly for converting background vehicles into social agents when they enter it, which is likely not necessary in your case either.
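For example, the stripped-down scenario.py might end up looking roughly like this (a sketch only; the sstudio import paths below follow the usual SMARTS scenario layout and may need adjusting for your version):

```python
from pathlib import Path

from smarts.sstudio import gen_scenario
from smarts.sstudio import types as t

# With traffic, social_agent_missions, and bubbles all removed, the
# scenario contributes only the map; every vehicle is one of your ego agents.
gen_scenario(
    t.Scenario(),
    output_dir=Path(__file__).parent,
)
```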