edbeeching / godot_rl_agents_examples

Example Environments for the Godot RL Agents library
MIT License

[Task] Normalize obs in MultiLevelRobot #23

Closed Ivan-267 closed 7 months ago

Ivan-267 commented 7 months ago

I forgot to normalize some of the obs in MultiLevelRobot:

edbeeching commented 7 months ago

It will be interesting to see the difference in performance.

Ivan-267 commented 7 months ago

I've tried normalizing with the following code in AIController:

    # Closest active coin on the robot's current level.
    var closest_coin = level_manager.get_closest_active_coin(robot.global_position, robot.current_level)
    var closest_coin_position: Vector3 = Vector3.ZERO

    if closest_coin:
        closest_coin_position = robot.to_local(closest_coin.global_position)
        # Clamp the local offset to 30 units, then scale to roughly [-1, 1] per axis.
        closest_coin_position = closest_coin_position.limit_length(30.0) / 30.0

    var closest_enemy: Enemy = level_manager.get_closest_enemy(robot.global_position)
    var closest_enemy_position: Vector3 = Vector3.ZERO
    var closest_enemy_direction: float = 0.0

    if closest_enemy:
        var local_enemy_position = robot.to_local(closest_enemy.global_position)
        # Only report the enemy when it is within 30 units; scale the same way.
        if local_enemy_position.length() <= 30.0:
            closest_enemy_position = local_enemy_position / 30.0
            closest_enemy_direction = float(closest_enemy.movement_direction)
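For context, a minimal sketch of how such normalized values are typically returned from an AIController `get_obs()` override in Godot RL Agents is shown below; the exact observation layout used by MultiLevelRobot is an assumption here, and the real vector likely contains additional entries:

    # Sketch only: assumed get_obs() structure, not the exact MultiLevelRobot layout.
    func get_obs() -> Dictionary:
        # ... normalization code from the snippet above runs here ...
        var obs: Array = []
        # Closest coin offset in the robot's local frame, roughly in [-1, 1] per axis.
        obs.append_array([
            closest_coin_position.x,
            closest_coin_position.y,
            closest_coin_position.z
        ])
        # Closest enemy offset (same scaling) plus its movement direction.
        obs.append_array([
            closest_enemy_position.x,
            closest_enemy_position.y,
            closest_enemy_position.z,
            closest_enemy_direction
        ])
        return {"obs": obs}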

(Two training result images attached.)

As the results are not notably better (using the same hyperparameters; adjusting those could help), and are perhaps even a bit slower at solving the first level with coins, although probably similar after the 8 million steps, I think there is not much of an advantage to updating the env for now, at least with this configuration.

Ivan-267 commented 7 months ago

Closing for now based on the results above.