-
Hey @metinc , I was wondering what you think about this line in the examples refactor.
https://github.com/edbeeching/godot_rl_agents_examples/blob/72e4825651f1c9b078c3e7861b56267f78d0b0d6/examples/J…
-
**Describe the bug**
Since version 1.14.0 (tested here on 1.15.0), SmartScraperGraph on OpenAI stopped working with a pydantic-related error:
```
/venv/lib/python3.12/site-packages/google_crc32c/__…
-
https://github.com/AnandSingh-0619/home-robot/blob/79a7742ed4855482bf5cdd6a06429b3d5bea973a/projects/habitat_uncertainity/task/sensors.py#L105C1-L105C57
1. The YoloPerception class object is create…
-
### What happened + What you expected to happen
Dear ray team,
When attempting to initialize Ray with `ray.init(local_mode=True)`, the Ray dashboard failed to start with a return code of 1.
my expe…
-
The alpha loss is calculated in this repo via:
`alpha_loss = (self.alpha_log * (log_prob - self.target_entropy).detach()).mean()`
and self.target_entropy is initialized with `self.target_entropy = …
-
Currently, we use only a very limited number of observations (e.g. lift & drag, TKE, ...) and make them available to the RL agents. However, in literature the majority of current RL applications in flui…
-
Currently, depending on whether a series has been matched in standard mode or multi-episode mode (i.e. tvdb2/3/4 mode), different sources of metadata will be used for each episode's Director and Writer. In…
-
Hello, I am passing a custom gym environment to `DistributedD4PG`.
Sample code:
```
distributed_agent = DistributedD4PG(environment=train_environment,
                                    netw…
```
-
Hi,
Thanks for your great contribution to this repo. However, I found that the function `model.sample_multistep` is missing, referenced at this line:
https://github.com/mbreuss/consistency_trajectory_models_toy_task/blob/…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I want to create multi document agents using function calling as shown in here [structur…