-
## Keyword: detection
### Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions
- **Authors:** Yun-Chun Chen, Adithyavairavan Murali, Balakumar Sundaralingam, Wei Ya…
-
### Within 7 days Conferences
- ACM WSDM (Web Search and Data Mining) 2023 https://www.wsdm-conference.org/2023/
> 2/27~3/3, Singapore
- NDSS (Network and Distributed System Security Symposium) http…
-
# 💻 cs
## 📚 mask (total: 9)
### 📃 Deep Pneumonia: Attention-Based Contrastive Learning for Class-Imbalanced Pneumonia Lesion Recognition in Chest X-rays
- **Authors:** Xinxu Wei, Haohan Bai, Xianshi …
-
Hello!
I am trying to reproduce the training procedure of RESDSQL with T5-base on the text2sql task. I took the original `train.json` and `dev.json` SPIDER files from the leaderboard (https://yale…
-
Appreciate your impressive work.
In Table 2 of the main paper, are the MSP and MaxLogit results reproduced on CLIP or CLIPN? I tested MaxLogit on CLIP (ViT-B/32) with CIFAR100 (ID) and CIFAR10 (OOD),…
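For context, the MaxLogit and MSP scores discussed in this question are simple per-sample statistics over the classifier logits (for CLIP, typically the image-text similarities over the ID class prompts). A minimal sketch, assuming the logits are already computed as a NumPy array:

```python
import numpy as np

def maxlogit_score(logits: np.ndarray) -> np.ndarray:
    """MaxLogit OOD score: the maximum (unnormalized) logit per sample.

    Higher scores suggest the sample looks more in-distribution.
    """
    return logits.max(axis=1)

def msp_score(logits: np.ndarray) -> np.ndarray:
    """MSP baseline: maximum softmax probability per sample."""
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize exp
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

# Toy example (hypothetical logits): two confident "ID-looking" samples
# and one flat, "OOD-looking" sample.
logits = np.array([
    [8.0, 1.0, 0.5],
    [0.2, 7.5, 0.1],
    [1.1, 1.0, 0.9],
])
print(maxlogit_score(logits))  # [8.  7.5 1.1]
```

Either score would then be thresholded (or fed to an AUROC computation over ID vs. OOD samples) to separate the two distributions.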
-
Hello, I'm training StyleTTS2 in Mandarin, but I'm confused about the OOD data: should it be multi-speaker data, or can it be single-speaker data?
-
## Keyword: differential privacy
### State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey
- **Authors:** Chaoyu Zhang
- **Subjects:** Cryptography an…
-
First of all, is it already possible to fine-tune a single-speaker model?
If so, what should one pay attention to?
Second:
How do you prepare a dataset?
The train and val splits are pretty clear, but the OOD_t…
-
Hi team, when I try to use ViLT VQA for the AVQA (https://adversarialvqa.org/) test with ViM, it always gives me this error:
File "D:\multimodal_robustness\vilt_avqa.py", line 366, in
prin…
-
Thanks for the work.
It would be great if the difference between your work and the [HELM benchmarks](https://crfm.stanford.edu/helm/latest/) could be mentioned somewhere in the README. There seems to be lots of over…