-
[Grouped Query Attention](https://arxiv.org/abs/2305.13245) improves the parameter efficiency of the attention key/value (KV) projections and reduces memory I/O at inference time, making inference faster.
It can be implemente…
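As a rough sketch (tensor names and shapes below are illustrative, not taken from the paper), the core idea is to project only `n_kv_heads` key/value heads and let each one be shared by a group of query heads:

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(x, wq, wk, wv, n_heads, n_kv_heads):
    """Minimal GQA: n_heads query heads share n_kv_heads KV heads
    (requires n_heads % n_kv_heads == 0)."""
    B, T, _ = x.shape
    head_dim = wq.shape[1] // n_heads
    group = n_heads // n_kv_heads

    q = (x @ wq).view(B, T, n_heads, head_dim).transpose(1, 2)     # (B, H,    T, d)
    k = (x @ wk).view(B, T, n_kv_heads, head_dim).transpose(1, 2)  # (B, H_kv, T, d)
    v = (x @ wv).view(B, T, n_kv_heads, head_dim).transpose(1, 2)

    # Repeat each KV head across its query-head group so shapes line up;
    # only the smaller K/V tensors need to be cached at inference time.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)

    out = F.scaled_dot_product_attention(q, k, v)                  # (B, H, T, d)
    return out.transpose(1, 2).reshape(B, T, -1)
```

With `n_kv_heads == n_heads` this reduces to standard multi-head attention, and with `n_kv_heads == 1` to multi-query attention; values in between trade model quality against KV-cache size.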
-
My personal homepage, documenting my personal learning journey!
Learning workflow:
> First pass: read the whole text through to get an overview of the content.
>
> Second pass: targeted reading, recording notes and takeaways.
>
> Third pass: combine theory with practice.

Porting the notes to the blog bit by bit.
## ref
- [Deep learning papers reading roadmap](https://github.com/floodsung/Deep-Learning…
-
Hi authors,
Great work! Can you please share more details about the point-cloud alignment mentioned in your paper, quoted below:
> To ensure cohesion across various sources, we conduct preprocessing ste…
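For context while waiting for the authors: the actual preprocessing steps are not spelled out in this excerpt. A common, purely illustrative form of cross-source point-cloud alignment is to center each cloud and rescale it to a unit sphere before merging; the paper's pipeline may differ (e.g. ICP registration or axis re-orientation):

```python
import numpy as np

def normalize_point_cloud(points: np.ndarray) -> np.ndarray:
    """Center an (N, 3) point cloud at the origin and scale it to fit
    inside the unit sphere. Illustrative only, not the authors' method."""
    centered = points - points.mean(axis=0, keepdims=True)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / max(scale, 1e-8)
```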
-
Hi,
Thanks for sharing the model and code with us.
I am trying to use the vision foundation model for a zero-shot classification problem.
It is possible with **OpenGVLab/InternVL-14B-224px** bu…
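For reference, the general zero-shot recipe with a CLIP-style model is to embed the image and a set of class prompts, then softmax over the image-text similarities. Here is a minimal sketch using the standard `transformers` CLIP API with an illustrative checkpoint; the exact loading code for **OpenGVLab/InternVL-14B-224px** may differ (e.g. it may require `trust_remote_code=True`):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; swap in the image-text model you are actually using.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")
class_names = ["cat", "dog", "car"]
prompts = [f"a photo of a {c}" for c in class_names]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: (1, num_classes) image-text similarity scores.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(class_names, probs[0].tolist())))
```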
-
Nice work! When I try running the training code, I encounter the following error:
```
  File "/ssddata/yuzhen/EVE/eve/model/language_model/eve_llama.py", line 96, in forward
    clip_loss = self…
```
-
_ToDo: determine PhD focus and scope_
PhD funding project: https://www.tudelft.nl/en/2020/tu-delft/eur33m-research-funding-to-establish-trust-in-the-internet-economy
Duration: 1 Sep 2023 - 1 Sep 2…
-
@patrick-tssn Thanks for releasing the code!
I tried to train the model using the default script `finetune_video_image.slurm`. The result on MVBench turns out to be 47.3. I also tested the pre-traine…
-
@stevebottos Thanks for the really great code! I am learning a lot. Quick issue: I am not able to get `check_zero_shot_results.ipynb` working (it may be based on an older version of your code). C…