-
Hi,
Thank you very much for sharing the source code and model weights for BLIP-2. I have a general question about the data scale used for stage-1 and stage-2 training; it would be great to get your insights …
-
1) Log in via Judge
2) Go to a case
3) Go to the hearing tab
4) Click the three dots in the list of hearings for a hearing in the opt-out stage
5) Click on View Transcript
![Image](https://github.com/user-attachments/…
-
**Describe the bug**
I am encountering a decoding error while using the DAC model in conjunction with HuggingFace models.
This issue seems to arise from discrepancies between
- the `dac.py` configu…
-
Is there any way to run the full pipeline replacing ChatGPT with LLaMA 2? It looks like if `use_llama` is set, LLaMA is used for only stage 1, but ChatGPT is still used for stage 2.
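
To make the asked-for behavior concrete, here is a minimal sketch of gating *both* stages on `use_llama` rather than only stage 1. All function names here (`call_llama`, `call_chatgpt`, `run_pipeline`) are hypothetical stand-ins, not the repo's actual API:

```python
# Hypothetical sketch: route both pipeline stages through the same backend
# when use_llama is set, instead of only stage 1. Names are stand-ins.

def call_llama(prompt):
    # stand-in for a local LLaMA 2 call
    return f"[llama] {prompt}"

def call_chatgpt(prompt):
    # stand-in for the OpenAI ChatGPT call
    return f"[chatgpt] {prompt}"

def run_pipeline(prompt, use_llama=False):
    backend = call_llama if use_llama else call_chatgpt
    stage1_out = backend(f"stage 1: {prompt}")      # stage 1 already honors the flag
    stage2_out = backend(f"stage 2: {stage1_out}")  # change: stage 2 now honors it too
    return stage2_out
```

With `use_llama=True`, both stage outputs come from the LLaMA backend; whether the repo's stage-2 prompts transfer cleanly to LLaMA 2 is a separate question.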
-
How do I render novel_pose when using train_stage=2? Could you provide a script?
Best Regards,
-
Hello, thank you very much for open-sourcing this work!
I ran into two problems while trying to reproduce the results.
The first problem:
While using the [ecapa English pre-trained model](https://www.modelscope.cn/models/iic/speech_ecapa-tdnn_sv_en_voxceleb_16k/summary), I…
-
I am getting this warning in the terminal during stage 2:
```
2024-11-15 12:24:00,127 - [WARNING] Some sequences are longer than 2048 tokens. The longest sentence 2822 will be truncated to 2048. Consi…
```
-
Hi.
Is the workflow of stage-2 training as follows:
input -> forward denoising x4 -> vae decoding -> face detection -> arcface feature extraction?
This would consume too much memory if all gr…
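
The chain in the question can be written down as a minimal sketch just to make the stage ordering explicit; every function below is a stand-in (not the repo's actual modules):

```python
# Hypothetical sketch of the stage-2 forward chain described above.
# All function names are stand-ins, not the repo's actual code.

trace = []  # records the order the stages run in

def denoise_step(x):
    trace.append("denoise")
    return x  # stand-in for one forward denoising pass

def vae_decode(latent):
    trace.append("vae_decode")
    return latent  # stand-in for VAE decoding to pixel space

def detect_face(image):
    trace.append("face_detect")
    return image  # stand-in for face detection / cropping

def arcface_embed(face):
    trace.append("arcface")
    return face  # stand-in for ArcFace feature extraction

def stage2_forward(x):
    for _ in range(4):      # "forward denoising x4"
        x = denoise_step(x)
    image = vae_decode(x)
    face = detect_face(image)
    return arcface_embed(face)

stage2_forward("input")
```

Under this reading, backpropagating through all four denoising passes plus the VAE decoder and ArcFace would indeed multiply activation memory, which seems to be the concern raised above.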
-
**Description:**
**Where was the issue found:**
* **Committee ID:** C00100495
* **Environment:** DEV/STAGE
* **Browser:** Chrome

**Please describe the issue:** Change link for Update committee info …
-
Hi, I have some code like this:
```hcl
locals {
  level_2 = zipmap(
    flatten([for key, value in var.management_groups : formatlist("${key}/%s", keys(value.children)) if value.children != null]),
    …
```