SimoMaestri opened this issue 2 years ago
Hi, the agent in the Jupyter Notebook was trained and tested with DDPG.
In principle, it is enough to evaluate only the neural network between state and action; in this implementation, however, the TensorFlow session is also needed to run that network.
Any neural network can be examined this way, and the corresponding SHAP values can simply be used for the visualization.
Thus, the method is independent of the algorithm.
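Schematically, the relevant part of the notebook boils down to the following (a minimal sketch, assuming a TF1 graph where `ddpg.S` is the state placeholder, `ddpg.a` the action tensor, and `state_log` a NumPy array of recorded states):

```python
import shap

# The (input placeholders, output tensor) pair tells DeepExplainer which
# part of the graph to attribute; state_log serves as the background data.
model = ([ddpg.S], ddpg.a)
explainer = shap.DeepExplainer(model, state_log)

# SHAP values per state feature for a subset of the recorded states
shap_values = explainer.shap_values(state_log[:100])
```

Swapping in a different state-to-action (or state-to-value) network is the only change needed for another algorithm.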
Hi,
thank you for your answer. I've got another question about the notebook. When you define the `explain` function, you also define the model:

```python
model = ([ddpg.S], ddpg.a)
```
Is `ddpg.a` the actor network? I want to know how I can apply SHAP if my algorithm is not actor-critic (like DQN).
Hi, yes, exactly: in the notebook we examine the actor network. This is a network with the states as input and the action as output.
As you correctly note, there is no actor network in DQN. There you can instead examine the network between the states and the Q-values.
This makes sense, because you can check which states have a high importance for the network. You could then display the feature importance, for example.
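For a PyTorch Q-network, for example, that could look roughly like this (a sketch, assuming `q_network` is the trained state-to-Q-value module, `state_log` a 2D NumPy array of recorded states, and `feature_names` your state labels):

```python
import shap
import torch as th

background = th.as_tensor(state_log[:100], dtype=th.float32)

# Explain the Q-network: one set of SHAP values per Q output (action)
explainer = shap.DeepExplainer(q_network, background)
shap_values = explainer.shap_values(background)

# Feature importance of the state inputs across all actions
shap.summary_plot(shap_values, state_log[:100], feature_names=feature_names)
```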
Hi, thank you for your answer. I'll share my code because I have some problems, and there are some parts that I really don't understand.
I've tried to print your `ddpg.a` and noticed that `ddpg.a` is a tensor with the model's output dimension. So the first question is: why is this a tensor and not a network? I then tried `ddpg.a.eval()` to show the content of this tensor, but this raises an error, so I don't know what this tensor contains.
What I've done from the beginning is define the model and then train it with `model.learn`:

```python
import highway_env  # registers highway-fast-v0 with gym
from stable_baselines3 import DQN

model = DQN(
    'MlpPolicy',
    "highway-fast-v0",
    policy_kwargs=dict(net_arch=[256, 256]),
    learning_rate=5e-4,
    buffer_size=15000,
    learning_starts=200,
    batch_size=32,
    gamma=0.8,
    train_freq=1,
    gradient_steps=1,
    target_update_interval=50,
    exploration_fraction=0.7,
    verbose=1,
    tensorboard_log="highway_dqn/",
)
model.learn(int(2e4))
```
By doing this I've obtained my trained DQN model. Then I've followed your code and used the `eval` function:

```python
import gym
import numpy as np
from gym import wrappers
from time import time

# python function to get the state_log and action_log
def eval(video=False):
    action_log = []
    state_log = []
    env = gym.make("highway-fast-v0")
    if video:
        env = wrappers.Monitor(env, './videos/' + str(time()) + '/')
    s = env.reset()
    done = False
    reward = 0
    while not done:
        if video:
            env.render()
        a, _ = model.predict(s, deterministic=True)  # instead of ddpg.choose_action()
        s_, r, done, info = env.step(a)
        s = s_
        reward += r
        action_log.append(a)
        state_log.append(s)
    return np.array(state_log), np.array(action_log)
```

and then:

```python
state_log, action_log = eval()
```
Now I can define `feature_names`, and then I can also define the explain function in this way:

```python
S = tf.placeholder(tf.float32, [None, s_dim, s_dim], 's')
model = ([S], a)
explainer = shap.DeepExplainer(model, state_log)
```
The problem is that I don't know how to fill the `a` variable. I've tried to solve this in two ways:
1. I've tried to fill the `a` variable with `model.q_net` (i.e. `a = model.q_net`); in DQN, `model.q_net` returns the Q-network, but the code doesn't work. As far as I can tell, `model.q_net` only returns the shape of the network, not the trained network. If I print `model.q_net`, this is what I get:

```
QNetwork(
  (features_extractor): FlattenExtractor(
    (flatten): Flatten(start_dim=1, end_dim=-1)
  )
  (q_net): Sequential(
    (0): Linear(in_features=25, out_features=256, bias=True)
    (1): ReLU()
    (2): Linear(in_features=256, out_features=256, bias=True)
    (3): ReLU()
    (4): Linear(in_features=256, out_features=5, bias=True)
  )
)
```
2. Then I've tried to fill the `a` variable with the action log. To do this, I just converted `action_log` into a tensor and passed it to `shap.DeepExplainer`. That code doesn't work either.
Do you know how I can solve this problem and extract SHAP values? I mean, what parameters do I have to pass to the `shap.DeepExplainer` function?
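For context, the minimal version of what I expected to work looks like this (assuming `model.q_net` can be passed to SHAP as a plain torch module, with `state_log` from the eval function above):

```python
import shap
import torch as th

# recorded observations as torch tensors (q_net flattens them internally)
background = th.as_tensor(state_log[:50], dtype=th.float32)
test_states = th.as_tensor(state_log[50:100], dtype=th.float32)

explainer = shap.DeepExplainer(model.q_net, background)
shap_values = explainer.shap_values(test_states)  # one array per Q output
```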
Hi, I have a question about the code: why was all the training done using SAC (I mean inside the main.py file), while in LongiControl_SHAP.ipynb a DDPG session is initialized? Is it not possible to apply `shap.DeepExplainer` directly to the trained model?
What I want to do is extract SHAP values from an RL model trained with TQC, PPO, and HER. Do you think this is possible?
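If `DeepExplainer` can't see inside those models, I was thinking of a model-agnostic fallback along these lines (a sketch, assuming `model.predict` accepts a batch of observations and `state_log` comes from the eval function above):

```python
import shap
import numpy as np

obs_shape = state_log.shape[1:]                     # e.g. (5, 5) for highway-fast-v0
flat_states = state_log.reshape(len(state_log), -1)

def act(flat_obs):
    # KernelExplainer perturbs flat feature vectors; reshape them back
    # into the env's observation shape before querying the agent
    obs = flat_obs.reshape((-1,) + obs_shape)
    actions, _ = model.predict(obs, deterministic=True)
    return np.asarray(actions, dtype=float)

explainer = shap.KernelExplainer(act, flat_states[:50])
shap_values = explainer.shap_values(flat_states[50:60])
```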