-
**Describe the bug**
Dear experts,
I have run into the error below; thanks very much for your help.
**To Reproduce**
Steps to reproduce the behavior:
1. Run the RL training in CARLA.
2. See the error:
Traceback (most recent ca…
-
I am trying to generate Counterfactual Explanations for a NeuroTreeModel, also trained using `CounterfactualExplanations.jl`. While the model itself seems to work as expected (i.e., it is able to gene…
-
Hello,
when I tried to train an agent with this command:
```
python3 scripts/train_agent.py "./runs/cartpole_checkpoints" SB3_ON CartPole-v1 cuda '{"ALGO": "PPO"}' --save_freq=10000
```
the…
-
I am running a local sweep controller and agents on [Niagara HPC](https://www.scinethpc.ca/niagara/).
On the login node, I successfully initialize the controller using
`$ wandb sweep --controller confi…
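For context, a minimal sketch of what that setup can look like programmatically; the project name and parameter space are placeholders, and the `controller` key is what tells W&B the sweep is driven locally rather than by the cloud scheduler:
```python
import wandb

sweep_config = {
    "method": "random",
    "parameters": {"lr": {"min": 0.0001, "max": 0.1}},  # placeholder search space
    # Mark this sweep as locally controlled, not cloud-scheduled.
    "controller": {"type": "local"},
}

# Register the sweep (run this on the login node).
sweep_id = wandb.sweep(sweep_config, project="niagara-sweeps")

# Programmatic equivalent of `wandb sweep --controller config.yaml`.
# Agents on the compute nodes attach with `wandb agent <sweep_id>`.
controller = wandb.controller(sweep_id)
controller.run()  # blocks while scheduling runs for attached agents
```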
-
My environment is a 2080 Ti GPU, i9 CPU, 64 GB RAM, NVIDIA-SMI 470.161.03, Driver Version 470.161.03, CUDA Version 11.4.
After starting CARLA 0.9.11, I run "python3 dqn_train.py dqn_example/dqn_c…
-
I am trying to train an RL model using SAC and compare it to PPO by using the tutorial in [this notebook](https://colab.research.google.com/github/google-deepmind/mujoco/blob/main/mjx/tutorial.ipynb#s…
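As a simpler point of comparison than the MJX/Brax pipeline in that notebook, here is a minimal sketch of the same SAC-vs-PPO comparison using stable-baselines3 on a standard Gymnasium task; the environment choice and timestep budget are assumptions:
```python
import gymnasium as gym
from stable_baselines3 import PPO, SAC
from stable_baselines3.common.evaluation import evaluate_policy

env_id = "Pendulum-v1"  # any continuous-action task; SAC requires continuous actions

for algo in (SAC, PPO):
    env = gym.make(env_id)
    model = algo("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)
    # Evaluate both agents with the same protocol for a fair comparison.
    mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
    print(f"{algo.__name__}: {mean_reward:.1f} +/- {std_reward:.1f}")
```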
-
How can I export a trained model as a .pt (PyTorch) or ONNX model?
I have fully trained my model and want to deploy it into the Unity ML-Agents environment. I have to export the trained model either i…
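A minimal sketch of both export paths with plain PyTorch; the network below is a stand-in for your trained policy and the tensor shapes are assumptions (Unity ML-Agents consumes the `.onnx` file via Barracuda/Sentis):
```python
import torch

# Hypothetical policy network; replace with your trained model.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)
model.eval()

dummy_obs = torch.zeros(1, 8)  # one batch of observations, shape assumed

# TorchScript export (.pt): trace the model and save it.
torch.jit.trace(model, dummy_obs).save("policy.pt")

# ONNX export: same model, same example input.
torch.onnx.export(
    model,
    dummy_obs,
    "policy.onnx",
    input_names=["obs"],
    output_names=["action"],
    opset_version=11,
)
```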
-
Hello,
I think it would be very helpful to have agents that are pre-trained on different gym environments. I am working on some transfer learning examples, and it could be very helpful to have some b…
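To illustrate the workflow such pre-trained baselines would enable, here is a minimal transfer-learning sketch with stable-baselines3; the checkpoint filename is hypothetical:
```python
import gymnasium as gym
from stable_baselines3 import PPO

# "pretrained_cartpole.zip" is a hypothetical checkpoint that a model zoo could ship.
model = PPO.load("pretrained_cartpole.zip")

# Fine-tune on a target task with matching observation/action spaces.
target_env = gym.make("CartPole-v1")
model.set_env(target_env)
model.learn(total_timesteps=20_000)
model.save("finetuned_cartpole")
```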
-
In train.py, I see a central agent, an SL agent, and RL agents. They run on different CPU cores via the multiprocessing package, and the RL agents get the weights of the policy and value networks from the central …
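For reference, a minimal sketch of that pattern with Python's multiprocessing: the central process broadcasts its current weights to each RL agent through a per-agent queue and collects their updates from a shared queue; the toy arrays stand in for real network parameters:
```python
import multiprocessing as mp
import numpy as np

def rl_worker(rank, param_queue, update_queue):
    # Each RL agent pulls the latest central weights, then pushes an update.
    weights = param_queue.get()  # blocks until the central agent broadcasts
    update = np.random.randn(*weights.shape) * 0.01  # stand-in for real training
    update_queue.put((rank, update))

if __name__ == "__main__":
    param_queues = [mp.Queue() for _ in range(4)]
    update_queue = mp.Queue()

    workers = [
        mp.Process(target=rl_worker, args=(i, q, update_queue))
        for i, q in enumerate(param_queues)
    ]
    for w in workers:
        w.start()

    weights = np.zeros((8, 2))  # toy policy parameters held by the central agent
    for q in param_queues:      # broadcast the current weights to every RL agent
        q.put(weights)

    for _ in workers:           # aggregate one update from each agent
        rank, update = update_queue.get()
        weights -= update
    for w in workers:
        w.join()
```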
-
Hi,
Thanks a lot for sharing your work.
I found the `main_challenge_manipulation_phase2.py` and `test_submission.py` very useful, and I am wondering how the results in `output/trained_agents` are pr…