-
Hi, I've trained a SAC+HER model and it performed well on the task, but I'd like to know the details of the model.
How can I get a detail table like the model.summary() function used in ker…
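Assuming the trained policy is an ordinary PyTorch `nn.Module` (as in Stable-Baselines3 and most SAC implementations), a rough analogue of Keras's `model.summary()` can be sketched by iterating over `named_parameters()`; the helper name `summarize` and the example network sizes below are illustrative, not from any particular library:

```python
import torch.nn as nn

def summarize(module: nn.Module) -> str:
    """Build a per-parameter table with shapes and element counts,
    a rough analogue of Keras's model.summary()."""
    rows, total = [], 0
    for name, p in module.named_parameters():
        n = p.numel()
        total += n
        rows.append(f"{name:<30} {str(tuple(p.shape)):<15} {n:>8}")
    header = f"{'Parameter':<30} {'Shape':<15} {'Count':>8}"
    return "\n".join([header, "-" * len(header), *rows,
                      f"Total trainable params: {total}"])

# Illustration with a small actor-like network (hypothetical sizes):
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
print(summarize(net))
```

For a Stable-Baselines3 model, `print(model.policy)` also dumps the module tree directly.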
-
OpenAI's "CarRacing-v0" game has actions of 3 dimensions, respectively in the ranges [-1,1], [0,1] and [0,1].
But in your code there is only code that bounds the action and log_Prob into the range [-1,1], no co…
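One common way to handle mixed action bounds is to keep the tanh-squashed action in [-1, 1] and affinely rescale it to each dimension's `[low, high]` before stepping the environment. This is a minimal sketch under that assumption; `rescale_action` is an illustrative name, not a function from the codebase in question:

```python
import numpy as np

def rescale_action(a_tanh, low, high):
    """Affinely map a tanh-squashed action in [-1, 1] to [low, high],
    elementwise per action dimension."""
    a_tanh = np.asarray(a_tanh, dtype=np.float64)
    low = np.asarray(low, dtype=np.float64)
    high = np.asarray(high, dtype=np.float64)
    return low + 0.5 * (a_tanh + 1.0) * (high - low)

# CarRacing-v0 bounds: steering in [-1, 1], gas and brake in [0, 1]
low = np.array([-1.0, 0.0, 0.0])
high = np.array([1.0, 1.0, 1.0])
print(rescale_action([-1.0, -1.0, 1.0], low, high))  # -> [-1.  0.  1.]
```

Because the rescaling is a constant affine map per dimension, it only shifts the log-probability by a constant log-Jacobian and does not affect the policy gradient.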
-
RLlib converges slowly on a simple environment compared to comparable algorithms from other libraries under the same conditions (see the results below). Is this something that is expected, or is th…
-
**Important Note: We do not do technical support, nor consulting,** and we don't answer personal questions by email.
Please post your question on the [RL Discord](https://discord.com/invite/xhfNqQv), [R…
-
Thanks for your error report; we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetec…
-
I'm trying to train DetectoRS, but there are problems. The same dataset can be trained with Mask R-CNN.
CUDA:11.6
CUDNN:8.4.1.50
TensorRT:8.2.3.0
mask_rcnn_r50_fpn_mstrain-poly_3x_coco-f.py:
```
_bas…
-
Hi Petros,
in your SAC_discrete code you are using the following in `SAC_Discrete.py`:
```
min_qf_next_target = action_probabilities * (torch.min(qf1_next_target, qf2_next_target) - self.alpha * …
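The line above appears to compute the standard discrete-SAC soft state-value target, taking the expectation over the action distribution rather than sampling: V(s') = Σ_a π(a|s') [min(Q1, Q2)(s', a) − α log π(a|s')]. The sketch below shows that formula on toy numbers; it is my reading of the standard technique, not necessarily the exact code from `SAC_Discrete.py`:

```python
import torch

def soft_value_target(probs, log_probs, q1, q2, alpha):
    """Discrete-SAC soft state value: expectation over actions of the
    clipped double-Q value minus the entropy-temperature term."""
    min_q = torch.min(q1, q2)                       # elementwise min over the two critics
    per_action = probs * (min_q - alpha * log_probs)
    return per_action.sum(dim=-1)                   # sum over the action dimension

# Tiny example: batch of 1 state, 2 actions, uniform policy (hypothetical numbers)
probs = torch.tensor([[0.5, 0.5]])
log_probs = probs.log()
q1 = torch.tensor([[1.0, 2.0]])
q2 = torch.tensor([[1.5, 1.0]])
print(soft_value_target(probs, log_probs, q1, q2, alpha=0.2))
```

Summing π(a|s')·(…) over actions gives an exact expectation, which is why the discrete variant needs no reparameterized sampling.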
-
@mugiwarakaizoku
I'm having trouble getting SAC to learn CartPole effectively. Below is a sample output from one of the better trials, but in most trials it can't even break above a total reward of 1…
-
- [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] RL algorithm bug
+ [ ] documentation request (i.e. "X is missing from the documentation.")
+ [x] ne…