-
Hi @ChocoWu (https://github.com/ChocoWu),
Sorry for any inconvenience caused by reaching out like this. I am a research student currently at Georgia Tech, working on multimodal models, and have been working on around…
-
Add support for multimodal models, as discussed with @haileyschoelkopf
- This PR #1832 would be a great starting point.
- List all tasks we want to support for the first iteration.
-
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
##…
-
Glad to be the first one here!
Looking forward to the final release version of epiScanpy, its full tutorial, and the download link for the test data.
By the way, could epiScanpy be used as a multi-mod…
-
OpenAI has finally launched support for the audio modality in its Chat Completions API, both as input and as output.
We should start by supporting the input audio modality, in line with the existing API…
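For reference, a minimal sketch of what consuming the input audio modality could look like with the OpenAI Python SDK; the model name `gpt-4o-audio-preview` and the `input_audio` content part follow OpenAI's public docs but should be verified against the current API reference:
```python
# Sketch only: assumes the OpenAI Python SDK (>= 1.x) and the
# gpt-4o-audio-preview model with the documented input_audio content part.
import base64
from openai import OpenAI

client = OpenAI()

# Read a local WAV file and base64-encode it, as the API expects.
with open("speech.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Transcribe and summarize this clip."},
                {
                    "type": "input_audio",
                    "input_audio": {"data": audio_b64, "format": "wav"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```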
-
### Feature request
We would like to explore implementing [`ell`](https://github.com/MadcowD/ell) throughout the OpenAdapt codebase.
### Motivation
From https://x.com/wgussml/status/183361586413194…
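For context, ell wraps prompts as decorated Python functions. A minimal sketch of the pattern we would adopt (the model name and prompt text below are placeholders for illustration, not part of the OpenAdapt codebase):
```python
# Minimal ell sketch: with @ell.simple, the docstring becomes the system
# prompt and the return value becomes the user prompt. Model name is an
# assumption; check ell's docs for the current decorator signature.
import ell

@ell.simple(model="gpt-4o-mini")
def summarize_recording(description: str) -> str:
    """You are an assistant that summarizes OpenAdapt screen recordings."""
    return f"Summarize this recording: {description}"

print(summarize_recording("User opens a spreadsheet and filters column B."))
```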
-
Analyze the different schemes and check whether being slow to equilibrate implies multimodality.
-
## Motivation
There is significant interest in vLLM supporting encoder/decoder models. Issues #187 and #180, for example, request encoder/decoder model support. As a result, encoder/decoder supp…
-
I ran the evaluation script on the provided checkpoint and found the results somewhat different from those reported in the paper.
In particular, the FID and R-precision are higher, but the MultiModality an…
-
I downloaded the checkpoint following section 3.1 and ran app.sh following section 4. However, I found that none of the multimodality instructions work properly.
```text
all_gen_img_idx: []
all_g…