-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I want to create multi-document agents using function calling, as shown here: [structur…
-
I get the following error when I run Mistral NeMo 12B. Would you mind updating your GitHub repository to accommodate this model? Thank you.
[rank0]: Traceback (most recent call last):
[rank0]…
-
After installing `gym` and `gym[atari]`, Breakout doesn't work:
```python
import gym

env = gym.make("ALE/Breakout-v5")
observation = env.reset()
for _ in range(1000):
    env.render()
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
env.close()
```
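If the failure is a missing-ROM or unknown-namespace error (the log is not shown above, so this is a guess), one common fix is that recent `gym` versions ship the ALE environments via `ale-py`, but the Atari ROMs require a separate, license-accepting extra:

```shell
# Installs ale-py plus the Atari ROMs via AutoROM's license-accepting extra
pip install "gym[atari,accept-rom-license]"
```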
-
# Why
#### As a
User of `pyCMO`
#### I want
to be able to easily train RL agents in `pyCMO`
#### So that
I can develop RL agents that can solve `pyCMO` scenarios
# Acceptance Criteria
##…
-
Can you help me pin the specific dependency versions, especially `transformers`? It would be great if you could list all the versions in req.txt.
-
Hi, I ran `python3.8 train_generalization_experiment.py` but it failed. The logs follow:
Traceback (most recent call last):
File "train_generalization_experiment.py", line 16, in
from metad…
-
To be symmetric with the tuple returned by the `step()` method, I think the `reset()` method should also return an `info` dict for arbitrary information about the game state.
This would be useful for certain…
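A minimal sketch of the proposed API, using a toy environment (the class and `info` keys here are illustrative, not part of Gym itself):

```python
class ToyEnv:
    """Toy environment sketching a reset() that returns (observation, info),
    mirroring the (observation, reward, done, info) tuple from step()."""

    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        observation = 0
        # Arbitrary game-state metadata, now available from the very first frame
        info = {"initial_state": "centered"}
        return observation, info

    def step(self, action):
        self.t += 1
        observation = self.t
        reward = 1.0
        done = self.t >= 10
        info = {"steps_taken": self.t}
        return observation, reward, done, info


obs, info = ToyEnv().reset()
```

With this shape, callers can unpack `reset()` and `step()` consistently instead of special-casing the first observation.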
-
### Describe the bug
The `CartPole` environment provides `reward == 1` both when the pole is still standing and when it has already "fallen".
The [old gym documentation](https://github.com/openai/gym…
-
### Proposal
Below the introduction section on the [website](https://gymnasium.farama.org/), we include several pages on critical topics that provide an explanation in a short and less example base…
-
### ❓ Question
Hello,
I have implemented a custom vectorized environment using MuJoCo (which adheres to Stable Baselines3's VecEnv standard), but I haven't found any evidence of RL Zoo 3 supporting …