-
Hello!
I just read the paper [Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) and wonder which 282 tasks this paragraph refers to:
> Second, increasing the …
-
What is the preferred strategy for fine-tuning: resuming training from pre-trained adapters (trained during pretraining) or creating a new adapter?
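For concreteness, here is a minimal sketch of the two options as I understand them, using HuggingFace PEFT (the base-model name, adapter path, and LoRA settings are placeholders, not from any particular repo):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel, LoraConfig, get_peft_model

# Placeholder checkpoint name; substitute the actual base model.
base = AutoModelForCausalLM.from_pretrained("base-model")

# Option A: resume training from adapters saved during pretraining.
resumed = PeftModel.from_pretrained(
    base, "path/to/pretrained-adapter", is_trainable=True
)

# Option B: attach a freshly initialized adapter instead
# (in practice, load a separate copy of the base model for this;
# the target module names below are assumed attention projections).
fresh = get_peft_model(
    base, LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
)
```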
-
Hi, thanks for your brilliant work!
I am curious why you don't combine the representations from vision and audio for the video classification task, since you already have them.
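For example, something like this toy late-fusion head is what I have in mind (PyTorch; the class name, dimensions, and inputs are made up purely for illustration):

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Concatenate per-modality embeddings, then classify."""
    def __init__(self, vision_dim: int, audio_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(vision_dim + audio_dim, num_classes)

    def forward(self, vision_emb: torch.Tensor, audio_emb: torch.Tensor):
        # vision_emb: (batch, vision_dim), audio_emb: (batch, audio_dim)
        fused = torch.cat([vision_emb, audio_emb], dim=-1)
        return self.classifier(fused)

# Toy usage with random embeddings.
head = LateFusionHead(vision_dim=768, audio_dim=512, num_classes=400)
logits = head(torch.randn(4, 768), torch.randn(4, 512))
```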
Also can one…
-
1) For some reason, esm_foldseek_model.py is badly formatted.
2) I can't find the config file that corresponds to esm_foldseek_model.py.
3) I can't find the training data for this model on Google Drive.
4) I real…
-
Hi InternLM team, thank you for this open-source contribution! InternLM looks like a really strong 7B model.
I think the research community would greatly benefit from learning about the training de…
-
### Title
ASK-RoBERTa: A pretraining model for aspect-based sentiment classification via sentiment knowledge mining
### Link
https://www.sciencedirect.com/science/article/abs/pii/S0950705122007584
…
-
- [x] Add a related work section: review other subgraph retrieval approaches
- [x] Positioning w.r.t. SOTA
- [x] Clearly define the problem
- [ ] Define the retriever
- [x] Highlight the impact and the…
-
Go over the *Multimodal Learning for Transformers* survey.
Look into other papers and start documenting.
The idea is to have a list of papers with the following takeaways:
- Summary of the work.
- Key …