-
Is support for the new gpt-4-vision-preview model planned soon?
-
Please provide support for the gpt-4-vision-preview model. The ChatCompletionMessage is currently not useful for gpt-4-vision-preview.
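For reference, a minimal sketch of the multi-part message that gpt-4-vision-preview expects, shown here with the official openai Python client purely to illustrate the wire format (the image URL is a placeholder, not from the original request); the point is that `content` becomes a list of text and `image_url` parts rather than a single string, which is what a plain-string ChatCompletionMessage cannot represent:
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            # "content" is a list of parts, not a plain string.
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/sample.png"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```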
-
What should I specify as the `model_type` in the JSON file?
```
from transformers import AutoModel
model = AutoModel.from_pretrained("zxhezexin/openlrm-obj-base-1.1")
```
```
ValueError: Unrecogniz…
```
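For context, that ValueError typically means the `model_type` declared in the repo's config.json is not one of the architectures registered in the installed transformers version. A small sketch (using only the public `CONFIG_MAPPING` registry) to list the `model_type` strings your install recognizes, so you can check whether the value in the JSON file matches anything:
```python
# List the model_type strings this transformers installation recognizes.
# If the value in the repo's config.json is not in this list, AutoModel /
# AutoConfig raise the "Unrecognized ..." ValueError shown above.
from transformers import CONFIG_MAPPING

print(sorted(CONFIG_MAPPING.keys()))
```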
-
### Requested feature
Enhanced table extraction for complex table formats. Currently, Docling is able to identify the values correctly, but formatting is sometimes misaligned or unclear, especially i…
-
Trying to send an image/png directly to the OpenAI completions API.
(Model: GPT-4o / GPT-4o mini)
Snippet:
```
if imageData != nil {
    historyWithBinary = append(historyWithBinary, llms.MessageCont…
```
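The snippet above is Go (langchaingo's `llms.MessageContent`); as a language-neutral reference, the underlying Chat Completions request carries a local PNG as a base64 data URL inside an `image_url` content part. A minimal Python sketch of just that part (the file name is a placeholder):
```python
import base64

# Embed the PNG bytes as a data URL; this is the form the image_url content
# part accepts when sending local binary data instead of an https URL.
with open("screenshot.png", "rb") as f:  # placeholder file name
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

image_part = {"type": "image_url", "image_url": {"url": data_url}}
```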
-
Hello, author! Thank you very much for your excellent work. I have a few questions about the Visual Symptom Generator (VSG):
In the paper you mention that "the refined set is obtained by intersecting the initial coarse set with the response". What operation does "intersecting" refer to here? I don't quite understand how, through the second GPT respo…
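For what it's worth, one plausible reading of "intersecting" (an assumption on my part, not confirmed by the paper) is a plain set intersection: keep only the symptoms from the initial coarse set that also appear in the second GPT response. A toy sketch:
```python
# Hypothetical illustration of "intersecting the initial coarse set with the
# response": keep only coarse-set symptoms the GPT response also mentions.
coarse_set = {"redness", "swelling", "itching", "blistering"}
gpt_response = "The lesion shows redness and mild swelling."

refined_set = {s for s in coarse_set if s in gpt_response.lower()}
print(refined_set)  # {'redness', 'swelling'}
```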
-
```
$ python captions_generator.py --save_path synthetic_captions --generation_idx 0 --concept_bank_size -1 --me…
```
-
First, thanks for this amazing project.
As the GPT-4 vision chat completion endpoint was introduced in v0.7.8,
an update to the example in README.md would be great.
-
I get the following issue when trying to generate code from a wireframe:
![image](https://github.com/excalidraw/excalidraw/assets/73549739/ec6604a7-7b3a-44cc-9148-9eaecd06f1b3)
Not sure how hard…
-
### Model introduction
GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction. It matches GPT-4 Turbo performance on text in English and code, with significant improve…