-
- [x] At affection level 2 and higher, a Pokémon will gain 1.2 times (approximated as 4915/4096 in Generation VII) the normal experience from battles.
- [x] At affection level 3 and higher, a Pokémon …
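The 1.2× figure above is an approximation because the bonus is applied with integer fixed-point math. A minimal sketch of that arithmetic (the function name and exact rounding behaviour are assumptions for illustration):

```python
def affection_exp(base_exp: int) -> int:
    # Hypothetical illustration: the 1.2x affection bonus is approximated
    # by the fixed-point fraction 4915/4096 (~1.19995) in integer math,
    # so the result is always slightly below a true 1.2x.
    return base_exp * 4915 // 4096
```

For example, a base of 4096 experience yields exactly 4915, while 100 base experience yields 119 rather than 120.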
-
I am training a BERT-base model for Chinese. The default MLM and NSP tasks are used. I am trying to train the model for 96k steps to see if it benefits from a longer training procedure. However, from step 6…
-
With the release of v1.21.0 on Lodestar, we've included the [new default graffiti format standard](https://github.com/ethereum/execution-apis/pull/517) to help better identify client pairs on mainnet …
-
We trained the model on our own dataset following the instructions provided in this repo, and the token accuracy is very high while the loss is relatively low. However, after creating the pytorch_model.b…
-
After debugging, your code runs smoothly, but its accuracy is very low, only 20 to 30%. I tried many batch sizes, and the result is still the same after many epochs. It…
-
Hello, thank you very much for conceiving the MDMLP model. Not only is the parameter count very small, but it also achieves high accuracy, which is very innovative. Now MDMLP is at two-dim…
-
Hello!
I really like the principle behind your idea of a separated softmax; however, while trying to implement it in my class-incremental framework I noticed **a possible downside** of your approach: it…
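For readers unfamiliar with the idea being discussed: a separated softmax normalises the logits only within the block (old classes or new classes) that contains the target, so gradients from new-task samples do not push old-class logits down. A minimal NumPy sketch, assuming a flat logit vector and a single split point (not the author's actual implementation):

```python
import numpy as np

def separated_softmax_ce(logits: np.ndarray, target: int, n_old: int) -> float:
    """Cross-entropy with a 'separated' softmax: normalise only over the
    block of classes (old: [0, n_old), new: [n_old, C)) containing the
    target, so the other block receives no suppressing gradient.
    Hypothetical, numpy-only illustration."""
    lo, hi = (0, n_old) if target < n_old else (n_old, len(logits))
    block = logits[lo:hi]
    block = block - block.max()  # numerical stability
    log_probs = block - np.log(np.exp(block).sum())
    return float(-log_probs[target - lo])
```

For a target in the new block, only the new-class logits enter the denominator, which is exactly what isolates the old head from new-task updates.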
-
Hi, I noticed you set the batch size to 12, so there is a high chance of having multiple positive samples in one batch. It seems you did not explicitly handle this problem; instead, these positive samples…
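One common way to handle multiple positives per batch is a supervised-contrastive-style loss that averages the log-probability over every same-label sample instead of treating extras as negatives. A generic NumPy sketch under that assumption (not the repo's actual loss):

```python
import numpy as np

def contrastive_multi_positive(sim: np.ndarray, labels: np.ndarray,
                               tau: float = 0.1) -> float:
    """SupCon-style loss: for each anchor, every other same-label sample
    in the batch is a positive; the log-softmax denominator runs over all
    non-self entries. `sim` is an [N, N] similarity matrix. Sketch only."""
    n = sim.shape[0]
    self_mask = np.eye(n, dtype=bool)
    logits = np.where(self_mask, -np.inf, sim / tau)   # exclude self
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                             # anchors with positives
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)
    return float((per_anchor[valid] / pos_counts[valid]).mean())
```

This averages over all positives rather than letting extra positives silently inflate the denominator as negatives.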
-
Hello, I am not sure where to start with troubleshooting the following issue.
I am trying to train NLA-SLR on WLASL-2000; when training Video-64, the top-1 per-class accuracy seems to be stuck at 0.05 - as…
-
Considering the fraction of signals versus total backgrounds, the data is highly skewed, which results in very low accuracy and F1 scores for all ML models. Is there any way to fix this, such as: including o…
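One simple remedy for this kind of signal/background skew is to rebalance the training set before fitting, either via class weights in the loss or by oversampling the minority class. A minimal random-oversampling sketch (function name and interface are illustrative; SMOTE or per-class loss weights are often better in practice):

```python
import numpy as np

def oversample_minority(X: np.ndarray, y: np.ndarray, seed: int = 0):
    """Duplicate minority-class rows at random until every class has as
    many samples as the largest one. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx_parts = []
    for c, n in zip(classes, counts):
        c_idx = np.flatnonzero(y == c)
        extra = rng.choice(c_idx, size=n_max - n, replace=True)
        idx_parts.append(np.concatenate([c_idx, extra]))
    idx = np.concatenate(idx_parts)
    rng.shuffle(idx)
    return X[idx], y[idx]
```

After rebalancing, accuracy becomes a meaningful metric again, though for rare-signal problems it is usually worth reporting precision/recall or AUC as well.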