facebookresearch / rlmeta

RLMeta is a light-weight flexible framework for Distributed Reinforcement Learning Research.
MIT License

Add more logging + ability to push to more downstream models #40

Closed EntilZha closed 2 years ago

EntilZha commented 2 years ago

This PR adds:

bcui19 commented 2 years ago

I was wondering why switch to rich-based logging vs. standard Python logging? Also, if we're changing to rich-based logging, would you want to propagate it through all of the examples as well? @xiaomengy, not sure if you have any thoughts on how we should do logging in the code base.

EntilZha commented 2 years ago

The main reason I swapped is that, both in the past and now, (1) I've had trouble getting Python's logging to actually output what I need (configuring the log level correctly), and (2) rich does a nice job of colorizing output. I'm not married to it, but I do like it quite a bit :).
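For context on point (1), here is a minimal stdlib sketch of the configuration being discussed; the logger name "rlmeta" and the format string are illustrative assumptions, not taken from the PR.

```python
import logging
import sys

def configure_logging(level: int = logging.INFO) -> logging.Logger:
    # force=True (Python 3.8+) replaces any handlers installed earlier,
    # which is a common cause of "logging outputs nothing" confusion.
    logging.basicConfig(
        level=level,
        stream=sys.stderr,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
        force=True,
    )
    # With rich installed, handlers=[rich.logging.RichHandler()] could be
    # passed to basicConfig instead of stream= to get colorized output.
    return logging.getLogger("rlmeta")  # logger name is an assumption

logger = configure_logging(logging.DEBUG)
logger.debug("debug messages now appear")
```

With `force=True`, calling this from anywhere in the program resets the root handlers, so the chosen level actually takes effect.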

I'd also be curious how to handle the downstream models; the way I added it isn't exactly clean, but I didn't really see another way. You need access to the extra downstream models inside the train call, but I don't think we really want them passed in, hence the optional arg.

bcui19 commented 2 years ago

Re: downstream models, yeah, I think it might be better if we made it a class variable. If we need to wait for the other inference servers to connect, then we could add add/set functions for downstream models (and maybe a remove function for when downstream model nodes go offline).
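A hedged sketch of the class-variable approach suggested above; the class and method names (`Learner`, `add_downstream_model`, etc.) and the `load_state_dict` interface are illustrative assumptions, not taken from the rlmeta codebase.

```python
from typing import Dict

class Learner:
    def __init__(self) -> None:
        # Keyed by a string id so individual nodes can be removed
        # later if an inference server goes offline.
        self._downstream_models: Dict[str, object] = {}

    def add_downstream_model(self, name: str, model: object) -> None:
        self._downstream_models[name] = model

    def remove_downstream_model(self, name: str) -> None:
        self._downstream_models.pop(name, None)

    def push_to_downstream(self, state_dict: dict) -> None:
        # Inside train(), push the latest weights to every registered
        # downstream model instead of threading them through train()
        # as an optional argument.
        for model in self._downstream_models.values():
            model.load_state_dict(state_dict)  # assumed model interface
```

Servers that connect late call `add_downstream_model` when they come online, and the optional argument to `train` goes away.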

bcui19 commented 2 years ago

LGTM!