-
In the instructions, DM proposes the following three scales:
- **Sexy**: How sexually appealing you find the image.
- **Arousal**: How much you felt your body react to the image.
- **Convinc…
-
We need to optimize the notebooks for:
- [x] MNIST hello world
- [x] OpenAI search
- [x] Benchmark OpenAI vs. sentence-transformers search
- [x] Train sentiment analysis using `transformers`
- [x] Mu…
-
Curated Weibo content
-
-
Image:
![Screenshot 2023-10-11 at 2 12 09 PM](https://github.com/vllm-project/vllm/assets/2249614/1351e4c8-23d1-443b-b4b2-1507914ff3db)
Fragment from vLLM logs:
```
experiences impro custome…
```
-
Hi! I am so glad that you have posted the code of Flex-VFL, and I really appreciate this work.
I noticed that the multimodal dataset uses some special models and training strategies, such as CP…
-
https://github.com/huggingface/datasets/releases/tag/2.14.0
main changes:
- use `token` instead of `use_auth_token`
- the default config name is now `default` instead of `username--dataset_name`:…
-
**Submitting author:** @alawryaguila (Ana Lawry Aguila)
**Repository:** https://github.com/alawryaguila/multi-view-AE
**Branch with paper.md** (empty if default branch): joss
**Version:** v1.0.0
**Edi…
-
Congratulations on your work! May I ask if you could provide the scripts for the feature extractor that produces the .pkl files?
Thank you!
-
Example: https://mila.quebec/en/publications/
It would be nice to reuse the same code as on the Mila website. Not sure whether that's 'easily' possible via RTD.