-
## Description
As far as I understand, BAML currently sends JSON schemas as part of the LLM prompt to structure outputs. With the recent introduction of GPT-4's Structured Outputs feature, we have an o…
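For reference, the native feature is invoked through the OpenAI SDK roughly as in the sketch below (a minimal illustration assuming the Python client; the model name and schema are placeholders, not taken from BAML):

```python
from openai import OpenAI

client = OpenAI()

# Placeholder schema; BAML would normally derive this from its own type definitions.
person_schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
    "additionalProperties": False,
}

resp = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # assumption: a model that supports Structured Outputs
    messages=[{"role": "user", "content": "Extract the person mentioned: Ada, 36."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "person", "schema": person_schema, "strict": True},
    },
)
print(resp.choices[0].message.content)  # JSON constrained to match the schema
```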
-
## Description
Not sure where the contract is broken, probably on the JupyterLab side, but just as a heads-up: the outputs do not show up and an exception is thrown:
```
TypeError: e.toJSON is not a fun…
-
-
![optical_1](https://github.com/user-attachments/assets/8e32ded8-d80a-4cfb-9229-401056300a62)
The first contraption combines 4 beams and outputs 2048 su
![optical_2](https://github.com/user-attachme…
-
### Bug Description
When I call the flow via curl, it returns a large JSON in which the flow output appears 6 times; how can I get only 1 output?
The output text exists in:
1. outputs.outputs.results.text.d…
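As a hedged sketch of pulling out a single result (the endpoint, payload, and key path below are assumptions for illustration, not the reporter's actual flow):

```python
import requests

# Hypothetical Langflow endpoint and flow id; substitute your own.
url = "http://localhost:7860/api/v1/run/<flow-id>"
payload = {"input_value": "hello", "output_type": "chat", "input_type": "chat"}

data = requests.post(url, json=payload, timeout=60).json()

# The response nests one entry per output component, which is why the same text
# shows up several times. Take just the first entry; the exact key path is an
# assumption here -- inspect your own response and adjust as needed.
first_result = data["outputs"][0]["outputs"][0]["results"]
print(first_result)
```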
-
### System Info / 系統信息
transformers==4.44.0
### Who can help? / 谁可以帮助到您?
_No response_
### Information / 问题信息
- [X] The official example scripts / 官方的示例脚本
- [ ] My own modified scripts…
-
### Relevant telegraf.conf
```toml
[agent]
interval = "5s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "5s"
…
-
### Describe the issue
There seems to be a memory corruption bug for model outputs that are also model inputs. Consider a model with an input that is also a model output:
![image](https://github.com…
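To make the scenario concrete, here is a minimal sketch (not the reporter's model) of a graph in which the input tensor is also declared as a graph output, built with the onnx helper API and run under onnxruntime; it assumes the runtime accepts such a graph, which the report implies:

```python
import numpy as np
from onnx import TensorProto, helper
import onnxruntime as ort

# "x" feeds an Identity node producing "y", and "x" itself is also listed as a
# graph output, so the same tensor is both a model input and a model output.
x_in = helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 2])
y_out = helper.make_tensor_value_info("y", TensorProto.FLOAT, [2, 2])
x_out = helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 2])

graph = helper.make_graph(
    nodes=[helper.make_node("Identity", ["x"], ["y"])],
    name="input_is_also_output",
    inputs=[x_in],
    outputs=[y_out, x_out],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])

sess = ort.InferenceSession(model.SerializeToString())
y_val, x_echo = sess.run(None, {"x": np.ones((2, 2), dtype=np.float32)})
print(y_val, x_echo)  # the second output should simply echo the input buffer
```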
-
### Confirm this is a Node library issue and not an underlying OpenAI API issue
- [X] This is an issue with the Node library
### Describe the bug
Since `4.59.0`, using structured outputs with…
-
Environment
torch2.4+cu118
flashattention2 2.6.3
transformers 4.45.0.dev0
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-2B-Instruct",
torch_dtype=torch.bfl…
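For context, the standard Qwen2-VL loading pattern from the model card looks like the sketch below; the call in the report is cut off, so the dtype and attention backend here are assumptions mirroring the listed environment (bfloat16 + FlashAttention-2), not the reporter's exact arguments:

```python
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Illustrative loading snippet; arguments are assumptions, not the reporter's code.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
```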