google-deepmind / meltingpot

A suite of test scenarios for multi-agent reinforcement learning.
Apache License 2.0

Guidance on Achieving a Static Frame Perspective in Agent Observations #239

Closed. neuronphysics closed this issue 3 weeks ago.

neuronphysics commented 2 months ago

Hello,

In our work with RGB observations in dynamic environments, I have run into a problem: the 'RGB' observations rotate in response to the agent's discrete actions. This rotation makes neighbouring cells hard to predict from frame to frame, so scenes appear jumpy rather than changing smoothly, and the abrupt changes in viewing direction add unnecessary complexity to isolating and tracking moving objects across consecutive frames.

Could you advise on, or recommend techniques for, setting up an environment (e.g. territory rooms) so that an agent keeps a static camera perspective even as it moves? I am particularly interested in any method that would give me a consistent visual reference point, which I believe would make object tracking methods more effective inside a policy network.

Any suggestions or insights from your team at DeepMind would be greatly appreciated. Thanks in advance.

jzleibo commented 2 months ago

Take a look at how the Avatar component is defined in the substrate config; it can take a kwarg useAbsoluteCoordinates, so try setting it to True. That might do what you want. The Avatar component has some other properties you might find useful for this too. When I did this kind of thing in the past I sometimes found it helpful to make the view window square, and possibly also to set centered=True in the view parameters. I can't remember exactly what centered=True is for, but I think it had something to do with this. The best thing to do is to run it in human player mode and experiment.
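
For concreteness, here is a minimal sketch of where that kwarg would sit in an avatar prefab inside a substrate config. Only useAbsoluteCoordinates, the view dict, and centered come from the discussion above; the surrounding keys and values are illustrative placeholders that will differ between substrates, so check your substrate's own avatar-building code for the real layout.

```python
# Illustrative sketch of an avatar prefab in a substrate config
# (e.g. under meltingpot/python/configs/substrates/). Key names other than
# useAbsoluteCoordinates, view and centered are placeholders here.
AVATAR_PREFAB = {
    "name": "avatar",
    "components": [
        # ... StateManager, Transform, Appearance, etc. ...
        {
            "component": "Avatar",
            "kwargs": {
                "index": 0,  # set per player in the real config
                "spawnGroup": "spawnPoints",
                # Move in world (absolute) coordinates instead of rotating
                # the observation with the avatar's heading.
                "useAbsoluteCoordinates": True,
                # A square, centered view window pairs well with the
                # absolute-coordinate setting.
                "view": {
                    "left": 5,
                    "right": 5,
                    "forward": 5,
                    "backward": 5,
                    "centered": True,
                },
            },
        },
    ],
}
```

Treat this as a starting point and verify the resulting behaviour interactively in human player mode.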

You can see more explanations of all the parameters of the Avatar component here.

neuronphysics commented 2 months ago

Hi, thanks for the response. I played around with the parameters in the library for hours last night, and I think I found a solution, although in this case the avatar is not centred. Here is how I changed the code to get more or less what I wanted:

1. Go to lua/modules/avatar_library.lua line 168 and change
self.gameObject:moveRel(_COMPASS[actions['move']]) to self.gameObject:moveAbs(_COMPASS[actions['move']])
2. Add an orientation = 'N' line at line 271

What do you think about this approach?

jzleibo commented 2 months ago

That's OK, but it's close to what useAbsoluteCoordinates already does. Is there some reason to prefer that approach over useAbsoluteCoordinates?

neuronphysics commented 1 month ago

Hi,

I have a follow-up question related to object tracking methods like slot attention in the MeltingPot environments. Specifically, is it possible to increase the size of the agents by a few pixels in the various substrates? Could you explain how to achieve this (I am also wondering whether it makes sense given the different tasks), or point me to the relevant parts of the codebase where such adjustments can be made?

Thank you!

jzleibo commented 1 month ago

What happens if you change the spriteSize parameter in the substrate_definition at the bottom of the substrate config files (e.g. here)? It should automatically scale the sprites to the size you provide. I would expect it to work best if you pick a multiple of the existing size, which is 8, so maybe try 16 or 32.

Does it crash when you change that though? It's possible we assumed that size somewhere else. Let me know where you get stuck and I might know solutions.
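
To make that concrete, here is roughly what that dict looks like; every value except spriteSize is an illustrative placeholder and will vary by substrate:

```python
# Illustrative sketch of the substrate_definition dict at the bottom of a
# substrate config; only spriteSize is the point of this example.
substrate_definition = dict(
    levelName="territory__rooms",
    levelDirectory="meltingpot/lua/levels",
    numPlayers=9,
    maxEpisodeLengthFrames=1000,
    spriteSize=16,  # default is 8; multiples of 8 (16, 32, ...) scale cleanly
    topology="BOUNDED",
    simulation={},  # game objects, prefabs, scene, etc. in the real config
)
```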

duenez commented 3 weeks ago

Closing, as there are already good answers here. Feel free to re-open if you have more questions.