So, I was brushing up on RL, and so far what I've seen is that most deep RL algorithms like actor-critic/DDPG prefer MLPs/fully connected layers. Recently I came across the OpenAI Requests for Research, where they mention they would like us to investigate the effect of regularization on different RL algorithms. One reason regularization may offer no benefit is that RL doesn't use complex models like ResNet.
So the question is: are you aware of any work where the depth of the network in reinforcement learning is comparable to some of the famous deep neural nets like SSD, YOLO, etc.? If yes, could you please post links to it?
Hi @sparshgarg23,
No, I am not aware of any work with such sophisticated architectures (though I am admittedly not a deep RL expert).
MLP / CNN / LSTM are definitely preferred in most papers.
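For context, here's a minimal sketch (PyTorch, with hypothetical layer sizes and dimensions) of the kind of shallow MLP actor those DDPG-style papers typically use — just two hidden layers, nowhere near ResNet/SSD/YOLO depth:

```python
# A minimal sketch of a typical shallow MLP actor for continuous control.
# Sizes (obs_dim, act_dim, hidden) are hypothetical, not from any specific paper.
import torch
import torch.nn as nn

class MLPActor(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        # Two fully connected hidden layers is the common default in DDPG-style work.
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, act_dim),
            nn.Tanh(),  # squash actions into [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

# Example usage with made-up dimensions: 17-dim observation, 6-dim action.
actor = MLPActor(obs_dim=17, act_dim=6)
action = actor(torch.randn(1, 17))
```

Compare that to the dozens of convolutional layers in a ResNet-50 or YOLO backbone — the gap in depth is part of why regularization effects in RL are an open question.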