-
### Request
Could you add support for conference (CCF conference) lookup?
### Version information
zotero version: 6.0.27
zotero-updateifsE version: 0.13.0
### Available materials
* CCF recommended list of international academic publications: https://www.ccf.org.cn/Academic_Evaluation/By_category/
* easySchol…
-
Hi, @wondervictor, a huge shoutout for your remarkable contributions!
I've seamlessly integrated YOLO-World into [X-AnyLabeling](https://github.com/CVHub520/X-AnyLabeling), marking a significant ad…
-
Thanks for your work in the anomaly detection domain. I am reaching out to discuss an aspect of your work that caught my attention, specifically regarding the experiments conducted in a zero-shot setting.…
-
It seems that GPT-style models such as LLaMA 2 are more popular,
but the paper still uses T5.
Compared to GPT, does using T5 offer any particular advantages?
-
Thank you for the nice work.
Regarding the training of ViCLIP, I would like to clarify my understanding of this paper.
If the vision transformer is not pre-trained, e.g. with an MAE-style method, then it means that it only align…
-
## Problem statement
1. Despite the impressive capabilities of large-scale language models, their potential for modalities other than text has not been fully demonstrated.
2. Aligning parameters of vi…
-
Hi there - thanks for your great work on vision and language pre-training! I'm trying to run the codebase, but I am running into issues installing `python-prctl`, since I do not have `sudo` access. Is th…
-
> Hugging Face Transformers is an open-source framework for deep learning created by Hugging Face. It provides APIs and tools to download state-of-the-art pre-trained models and further tune them to m…
-
Thank you for your significant contributions to Vision and Language Navigation.
I've been using the `bash pretrain_src/scripts/pretrain_r2r.bash` script to pre-train the given 9 tasks. However, I…
-
- [ ] [MoAI/README.md at master · ByungKwanLee/MoAI](https://github.com/ByungKwanLee/MoAI/blob/master/README.md?plain=1)
# MoAI/README.md at master · ByungKwanLee/MoAI
## Description
![MoAI: Mixture…