-
**TL;DR** - Opus Clip is a generative AI video tool that repurposes long talking videos into shorts in one click. Powered by OpenAI.
**Website:**
https://www.opus.pro
-
https://blog.openai.com/openai-baselines-dqn/
*... In the DQN Nature paper the authors write: "We also found it helpful to clip the error term from the update [...] to be between -1 and 1." There ar…
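Since the excerpt is cut off, here is a minimal, self-contained sketch (function names are mine, not from the post or the Baselines code) of what that clipping means: clipping the TD error in the squared-loss gradient to [-1, 1] is equivalent to using a Huber loss with delta = 1.

```python
def clipped_grad(td_error: float) -> float:
    # The gradient of 0.5 * e^2 w.r.t. e is just e; the Nature paper
    # clips this term to [-1, 1] before backpropagating.
    return max(-1.0, min(1.0, td_error))


def huber_loss(td_error: float, delta: float = 1.0) -> float:
    # Quadratic near zero, linear in the tails. For delta = 1 its
    # derivative is exactly clipped_grad above, which is why the two
    # formulations behave the same during training.
    a = abs(td_error)
    if a <= delta:
        return 0.5 * td_error ** 2
    return delta * (a - 0.5 * delta)
```

For example, a TD error of 3.0 contributes a gradient of 1.0 (not 3.0), and the corresponding Huber loss grows linearly (2.5) rather than quadratically (4.5).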
-
Great work! I've read the paper, and it seems `LLaVA+S^2` is implemented with the OpenCLIP vision encoder, and the LLM is fine-tuned with LoRA. However, the LLaVA baseline you compared with is implement…
-
Hello everyone,
First off, a big thanks to city96 for the awesome work they've been contributing to the community. It's been incredibly helpful!
Here are my system specs:
Processor: Intel i5-13…
-
I love your project, and I want to use it with local Ollama + LLaVA. I've tried many approaches, including asking ChatGPT.
I'm on Windows 11. I tried Docker with no luck, and I changed the API address from the settings in the front…
-
When listing the available models via `clip.available_models()`, ViT-L-14 (as well as RN50x64) is not included. I tried to reinstall the repository via `pip install git+https://github.com/open…
-
I always get this error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Ca…
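A hedged workaround sketch for this kind of SSL failure (this assumes `clip-vit-large-patch14` has already been downloaded into the local Hugging Face cache at least once): tell `huggingface_hub`/`transformers` to use cached files only, so no HTTPS connection to huggingface.co is attempted.

```shell
# Use only locally cached model files; skip all network calls to the Hub.
export HF_HUB_OFFLINE=1
export TRANSFORMERS_OFFLINE=1

# If the failure is instead caused by outdated CA certificates,
# refreshing certifi sometimes resolves the SSLError:
pip install -U certifi
```

If the files were never downloaded, offline mode will fail with a "file not found in cache" error instead, and the certificate or proxy issue has to be fixed first.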
-
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …
-
I am able to trace the model with `torch.jit.trace`, and I know the shapes of the input tensors. I don't think I am using any tensors outside the GPU, but I keep getting the error message when trying to trace …
-
Currently I'm using both of the commands below to test YOLO-World, but I got different performance and results from the online demo. I would like to know which config file and weights are used in the huggingf…