-
### System Info
H100
### Who can help?
@byshiue
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in …
-
## Actual Behavior
In [the actual signature](https://github.com/STEllAR-GROUP/hpx/blob/master/libs/full/collectives/include/hpx/collectives/gather.hpp#L355) of `hpx::collectives::gather_there`:
``…
-
### Description of the bug:
Both of the following result in an error.
```python
model = genai.GenerativeModel(
model_name="models/gemini-1.5-pro-001",
system_instruction=(...),
…
-
### System Info
As in the title.
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
I tried to ins…
-
# Bug Report
## Problem
[comment]: # (A problem description)
The app crashes during key generation.
#### Expected behavior
Keys are generated normally.
#### Actual behavior
Crash
### Repro…
-
**Describe the bug**
Even though the VM was created over 30 minutes ago, it still shows `VirtualMachine generation is 2, but latest observed generation is 1`.
Note that we had a similar issue in `v1.3.0`; see #4909.
**…
-
### Description
When using compile-time logging source generation from `Microsoft.Extensions.Logging.Abstractions` version 8.0 with the message template provided:
```csharp
internal static partial…
-
**Describe the bug**
The behavior is somewhat random. **When the text generation input size is smaller than the batch size from the previous step** and there is more than one replica, the final output can be missing some samples. This does …
-
As stated, the generation tool crashes partway through the process. I am not sure if it is related, but around the time this started happening (and the couple of times it did finish), I began having…
-
### Expected Behavior
The LoRA should not consume 10 GB of VRAM.
### Actual Behavior
I run Flux fp8 Schnell on a 3090 with two rank-64 LoRAs applied to the model, but it uses all VRAM until it starts offload…