-
The libraries in the Saguenay region of Québec have an [excellent video for patrons](https://www.youtube.com/watch?v=hI9Woy9a_g8) on how to use the scope.
The video is of course in French. It woul…
-
Hi, is the code for training the Sinkhorn network available? So far I have found that only test_region_set.py uses the pretrained Sinkhorn network, but I'm interested to know how it was trai…
-
# Proposal: Improved Japanese text support in Closed Captions for media
## Summary
We've gotten a request to improve the way we handle Japanese text for closed captions. Today, we render these ho…
-
Hi, thanks for your amazing work.
I would like to train your image captioning model on my own dataset. Because of my experimental setting, I can only use texts and image region features. During train…
-
[paper](https://arxiv.org/abs/2104.08718)
## TL;DR
- **I read this because.. :** I'm interested in the CLIP score
- **task :** evaluation for captioning
- **problem :** previous reference-based evaluation … familiar…
-
Hello!
I have a question about extracting region features for image captioning:
- the VinVL paper states that the 2048-d region features are stacked with 6 positionally encoded features (bbox, its h…
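As a sketch of what "stacking 2048 region features with 6 positional features" could look like in practice: the snippet below concatenates each pooled region feature with 6 box-geometry values (normalized corner coordinates plus relative width and height). The exact ordering and normalization of the 6 values here is an assumption for illustration, not taken from the VinVL code.

```python
import numpy as np

def add_positional_features(region_feats, boxes, img_w, img_h):
    """Append 6 box-geometry values to each region feature.

    region_feats: (N, 2048) array of pooled region features
    boxes: (N, 4) array of (x1, y1, x2, y2) in pixels
    Returns an (N, 2054) array. The choice and order of the 6
    values below is a plausible guess, not the confirmed VinVL layout.
    """
    x1, y1, x2, y2 = boxes.T
    w = (x2 - x1) / img_w          # box width relative to image width
    h = (y2 - y1) / img_h          # box height relative to image height
    pos = np.stack([x1 / img_w, y1 / img_h,
                    x2 / img_w, y2 / img_h, w, h], axis=1)
    # Concatenate along the feature axis: (N, 2048) + (N, 6) -> (N, 2054)
    return np.concatenate([region_feats, pos], axis=1)
```

With this layout the transformer input per region becomes a single 2054-d vector, so no separate positional embedding table is needed for the boxes.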
-
### Feature Description
Persons have an occupation type and, until some time ago, also had an occupation facility and the corresponding region, district, and community. However, these infrastruct…
-
**Problem:**
Cannot set default certificate via Key Vault.
**Screenshot:**
![image](https://github.com/user-attachments/assets/3aa1f1f0-2a9c-45d8-89e6-2e78862fa316)
![image](https://github.com/…
-
NTSC closed caption data overlaps the first line of the active region, so it's visible (and distracting) in the output:
![mpv-shot0252](https://user-images.githubusercontent.com/436317/71846933-9e4…
-
Hi @jeromedockes
I know that pubget extracts the `table_id` and `table_label` alongside the coordinates.
Does pubget have the ability to, or is there interest in expanding pubget to extract mo…