-
Hi, respect for your awesome work! I have a question about the training. In the backtracking stage, the generator's timestep is fixed to 399, while the timesteps of the student and the teacher are randomly sampled …
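For reference, here is how I read that scheme (a minimal sketch with hypothetical names, not the repo's actual code): the one-step generator always runs at a fixed timestep, while the student and teacher each see a fresh, uniformly sampled timestep per example.
```
import torch

GEN_TIMESTEP = 399  # the generator's timestep, fixed as described above

def sample_student_teacher_timesteps(batch_size, num_train_timesteps=1000, device="cpu"):
    # student/teacher timesteps drawn uniformly at random, one per sample
    return torch.randint(0, num_train_timesteps, (batch_size,), device=device)
```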
-
```
try:
    # load the pipeline configuration and build the callback-preparation step
    config = ConfigurationManager()
    prepare_callbacks_config = config.get_prepare_callback_config()
    prepare_callbacks = PrepareCallback(config=prepare_callbacks_config)
    callb…
```
-
I am curious which parts of the config need to be modified when training the Large version of WavTokenizer on a larger dataset. Could you please give me a reference configuration? In addition, can yo…
-
```
!pip install transformers datasets
from transformers import GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments
# note: load_metric is deprecated in recent datasets releases (moved to the evaluate library)
from datasets import load_dataset, load_metric
tokenizer = GPT2Tokenizer.from_…
```
-
When the model has converged to a reasonably good state, what loss values should I expect? In the current training code, what values do the generator loss and the discriminator loss settle at? The generation quality of my fir…
-
### Feature request
Any plan to add support for style references, seed images, and control parameters?
### Motivation
This feature has been speculated in [this](https://naomiclarkson0.medi…
-
Hi,
I'm trying to train a multi-output NN and I need to change the weight of each loss component depending on the epoch number. In some previous versions of Keras I implemented this mechanism by defi…
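One common way to do this in current Keras (a minimal sketch, assuming TF2 and two hypothetical output heads named `out_a` and `out_b`): keep each weight in a non-trainable `tf.Variable`, close over it in the loss function so it is read at every step, and update it from a callback's `on_epoch_begin`.
```
import tensorflow as tf

# hypothetical per-output loss weights, updated between epochs
w_a = tf.Variable(1.0, trainable=False)
w_b = tf.Variable(0.1, trainable=False)

def weighted_mse(weight):
    # the returned loss reads `weight` at every training step,
    # so callback updates take effect immediately
    def loss(y_true, y_pred):
        return weight * tf.reduce_mean(tf.square(y_true - y_pred))
    return loss

class LossWeightScheduler(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # example schedule: shift emphasis from output A to output B
        w_a.assign(max(0.1, 1.0 - 0.1 * epoch))
        w_b.assign(min(1.0, 0.1 + 0.1 * epoch))

# model.compile(optimizer="adam",
#               loss={"out_a": weighted_mse(w_a), "out_b": weighted_mse(w_b)})
# model.fit(x, {"out_a": y_a, "out_b": y_b}, epochs=10,
#           callbacks=[LossWeightScheduler()])
```
Because the weights live in variables rather than Python floats, the compiled training graph picks up the new values without recompiling the model.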
-
```
!pip install transformers datasets
from transformers import GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments
from datasets import load_dataset, load_metric  # deprecated in recent datasets releases
from transformers import GPT2LMH…
```
-
### Issue Type
Bug
### Source
pip (mct-nightly)
### MCT Version
PR #1186
### OS Platform and Distribution
Linux Ubuntu 22.04
### Python version
3.10
### Describe the issu…
-
I noticed that the training code does not set `requires_grad=False` for the discriminator when training the generator, which propagates gradients into the discriminator and pushes the discriminator to treat …
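For reference, one common way to avoid this (a minimal PyTorch sketch with generic `G`/`D` modules and a BCE criterion, not the repo's actual code) is to toggle `requires_grad_` on the discriminator's parameters around the generator update:
```
import torch

def generator_step(G, D, opt_G, z, criterion):
    # freeze D so the generator update does not accumulate
    # gradients in the discriminator's parameters
    for p in D.parameters():
        p.requires_grad_(False)

    opt_G.zero_grad()
    fake = G(z)
    # the generator wants D to label its samples as real (1.0);
    # assumes D outputs one logit/probability per sample
    loss_G = criterion(D(fake), torch.ones(z.size(0), 1, device=z.device))
    loss_G.backward()
    opt_G.step()

    # unfreeze D for its own training step
    for p in D.parameters():
        p.requires_grad_(True)
    return loss_G.item()
```
Zeroing the discriminator's gradients before its own step achieves the same end result, but freezing D also avoids computing and storing parameter gradients for D during the generator pass.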