-
# Main Remark
Currently, in the TabNet architecture, part of the Feature Transformer's output is used for the predictions (n_d), and the rest (n_a) serves as input to the next Attentive Transformer.
But…
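
For reference, a minimal PyTorch sketch of that split (sizes here are placeholders, not the library's defaults):

```python
import torch

# The Feature Transformer emits n_d + n_a units per sample: the first n_d
# feed the decision/prediction path, the remaining n_a feed the next
# Attentive Transformer.
n_d, n_a, batch = 8, 8, 4
ft_out = torch.randn(batch, n_d + n_a)

decision = torch.relu(ft_out[:, :n_d])  # used for the predictions
attentive_in = ft_out[:, n_d:]          # input to the next Attentive Transformer
```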
-
**Are you using the latest release?**
funannotate v1.8.17
**Describe the bug**
I'm trying to run the test after installation using `funannotate test -t all --cpus 10` but it crashes in the `funan…
-
The current implementation of the Text Exploration with BERT project provides a solid foundation for predicting words for a single [MASK] token within a given piece of text. However, there are two sig…
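
For context, a minimal sketch of single-[MASK] prediction with the Hugging Face `transformers` fill-mask pipeline; the checkpoint name is an assumption, not necessarily the one this project uses:

```python
from transformers import pipeline

# bert-base-uncased is an assumed checkpoint for illustration only.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Returns the top candidate tokens for the single [MASK] position.
for candidate in fill_mask("The capital of France is [MASK]."):
    print(candidate["token_str"], candidate["score"])
```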
-
## Problem
Currently, I am training a model on images taken in my house.
The original model itself is unchanged.
When I run prediction on an image, the output comes out white, like the image below.
I chang…
-
Hello,
I ran the following command:
```
funannotate train -i XZ1516_ragtag_correct_scaffold_masked_nameChange.fasta -o funannotate_run/ \
--left XZ1516_S1_R1_001.fastq.gz \
--right XZ…
```
-
**Bug description**
Running the example command for the MSA Transformer in the Variant Prediction example in https://github.com/facebookresearch/esm/tree/main/examples/variant-prediction results in a…
-
Thanks again for this awesome repo. It helps me a lot. I've got a question about which time_range to use when sampling subgraphs at test time. For example, in [finetune_OAG_PF.py](https://github.com/ac…
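
For illustration, a simplified sketch (not pyHGT's exact code; the split years are assumptions) of how a `time_range` dict typically gates which nodes are eligible during subgraph sampling:

```python
# Timestamps observed in the graph (hypothetical values).
graph_times = [2012, 2013, 2014, 2015, 2016, 2017, 2018]

# time_range dicts: a timestamp is allowed iff it appears as a key.
train_range = {t: True for t in graph_times if t <= 2016}
test_range = {t: True for t in graph_times if t > 2016}

def eligible(node_time, time_range):
    # A neighbor can enter the sampled subgraph only if its timestamp is in range.
    return node_time in time_range

print(eligible(2018, test_range))   # True
print(eligible(2018, train_range))  # False
```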
-
Thanks for your great work! In transformer.py, I think token_embed should be initialized from the pretrained codebook via the function **load_and_freeze_token_emb** during training. Looking forward to your reply.
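
For context, a minimal PyTorch sketch of what such codebook initialization plus freezing typically looks like; the signature below is an assumption, not the repo's actual implementation:

```python
import torch
import torch.nn as nn

def load_and_freeze_token_emb(token_embed: nn.Embedding, codebook: torch.Tensor) -> None:
    # Hypothetical signature: copy pretrained codebook weights into the
    # embedding table, then exclude it from gradient updates.
    assert token_embed.weight.shape == codebook.shape, "codebook/embedding size mismatch"
    with torch.no_grad():
        token_embed.weight.copy_(codebook)    # initialize from the codebook
    token_embed.weight.requires_grad = False  # freeze during training
```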
-
Thank you for your efforts, but I have a question about the MAE code.
https://github.com/lucidrains/vit-pytorch/blob/dc57c75478c98241fd232a64a7bb4c23c5861730/vit_pytorch/mae.py#L91
MSE loss was ca…
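
For context, a schematic of the masked-patch reconstruction loss around the referenced line: MSE between the decoder's predicted pixel values and the original masked patches. Shapes and tensor names here are illustrative, not the repo's exact code:

```python
import torch
import torch.nn.functional as F

batch, num_masked, patch_dim = 2, 16, 48
pred_pixel_values = torch.randn(batch, num_masked, patch_dim)  # decoder output
masked_patches = torch.randn(batch, num_masked, patch_dim)     # ground-truth pixels

# Loss is computed only over the masked patches, not the visible ones.
recon_loss = F.mse_loss(pred_pixel_values, masked_patches)
```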
-
Hi, I trained a **langid model** on my dataset following these [steps](https://stanfordnlp.github.io/stanza/langid.html#training-your-own-model) and finished with this command:
```python
python -m st…
```
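
For reference, a usage sketch with Stanza's multilingual pipeline; this loads the stock langid model, and pointing it at the custom model trained above may require configuration not shown here:

```python
from stanza.pipeline.multilingual import MultilingualPipeline

# Default multilingual pipeline: detects each document's language via langid.
nlp = MultilingualPipeline()
docs = nlp(["Hello world.", "C'est une phrase française."])
for doc in docs:
    print(doc.text, doc.lang)
```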