-
1. All installation steps have been completed
2. Running `python models_server.py --config config.yaml`
3. Getting the following output with an error
```
Fetching 27 files: 100%|███████████████████████████████…
```
-
Server is up and running. Running the Gradio demo with `python run_gradio_demo.py --config config.gradio.yaml`:
```
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launc…
```
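For reference, the note about `share=True` is Gradio's standard public-link option, not something specific to this demo. A minimal sketch with a placeholder function (not the project's actual demo code):

```python
import gradio as gr

# Placeholder function standing in for the real demo logic.
def respond(message):
    return message

demo = gr.Interface(fn=respond, inputs="text", outputs="text")
# share=True tunnels the local server through a temporary public
# *.gradio.live URL in addition to http://127.0.0.1:7860.
demo.launch(share=True)
```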
-
When I first opened the software, it asked me to download the .zip archive, but when the download finished it seemed to say that the file's hash could not be verified, and then I closed th…
-
**Meeting date/time (UTC)**
2022-01-28 18:00
**Instructions for joining the meeting**
https://meet.google.com/mdz-axok-ndz?authuser=0&hs=122
**Agenda**
*Note: Add a comment on this issue …
-
Can you give me some advice on fine-tuning the GIT model on my own dataset, if fine-tuning even makes sense for a video captioning task?
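Not from the thread, but a rough sketch of what one possible fine-tuning step could look like with the `transformers` API, assuming the `microsoft/git-base-vatex` checkpoint; `frames` and `caption` below are stand-ins for real data, and the model card should be checked before relying on this:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

# Stand-ins for one training example: frames sampled from a clip plus its caption.
frames = [Image.new("RGB", (224, 224)) for _ in range(6)]
caption = "a person is cooking in a kitchen"

processor = AutoProcessor.from_pretrained("microsoft/git-base-vatex")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-vatex")

inputs = processor(images=frames, text=caption, padding=True, return_tensors="pt")
# Video-ready GIT variants take 5D pixel values: (batch, num_frames, C, H, W).
pixel_values = inputs.pixel_values.unsqueeze(0)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(input_ids=inputs.input_ids,
                attention_mask=inputs.attention_mask,
                pixel_values=pixel_values,
                labels=inputs.input_ids)  # causal-LM loss on the caption tokens
outputs.loss.backward()
optimizer.step()
```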
-
### System Info
```
name : transformers
version : 4.26.0.dev0
```
### Who…
-
What are sequence-to-sequence language models and how are they related to transformer models?
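For context, a sequence-to-sequence model pairs an encoder that reads the input with a decoder that generates the output; many of them (T5, BART, etc.) are transformers. A minimal sketch, assuming the `t5-small` checkpoint (decoder-only models such as GPT-2 would be loaded with `AutoModelForCausalLM` instead):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Encoder-decoder transformer: the encoder reads the input sequence,
# the decoder generates the output sequence token by token.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```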
-
Hello, how can I use pre-trained BERT or GPT transformers for a video captioning task using CNN features rather than a vision transformer?
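Not an answer from the thread, but one common pattern is prefix conditioning: project the pooled CNN features into the decoder's embedding space and prepend them to the caption tokens. A sketch under that assumption, with hypothetical names throughout:

```python
import torch
from transformers import GPT2LMHeadModel

class CNNPrefixCaptioner(torch.nn.Module):
    """Condition GPT-2 on pooled CNN features by prepending them as a prefix."""

    def __init__(self, cnn_feat_dim=2048):
        super().__init__()
        self.gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
        # Map CNN features (e.g. ResNet pool5) into GPT-2's embedding space.
        self.project = torch.nn.Linear(cnn_feat_dim, self.gpt2.config.n_embd)

    def forward(self, cnn_feats, input_ids, labels=None):
        # cnn_feats: (batch, num_frames, cnn_feat_dim); input_ids: (batch, seq_len)
        prefix = self.project(cnn_feats)
        token_embeds = self.gpt2.transformer.wte(input_ids)
        inputs_embeds = torch.cat([prefix, token_embeds], dim=1)
        if labels is not None:
            # No language-model loss on the visual prefix positions.
            pad = torch.full(cnn_feats.shape[:2], -100,
                             dtype=torch.long, device=input_ids.device)
            labels = torch.cat([pad, labels], dim=1)
        return self.gpt2(inputs_embeds=inputs_embeds, labels=labels)
```

At inference time one would feed only the projected prefix as `inputs_embeds` and decode the caption token by token.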
-
Danny Driess et al., [PaLM-E: An Embodied Multimodal Language Model](https://palm-e.github.io/).
-
Thanks for sharing your wonderful work.
I haven't read your paper yet, so based on the demo video I have some questions:
1- Can your PDVC model be considered live video captioning?
2- Is the c…