-
Phd track: 21 Nov 2021 - 21 Nov 2025
First step in 4 year phd track: get paper from master thesis work.
Venues: https://github.com/Tribler/tribler/wiki/Scientific-publication-venues-for-ledger-scien…
-
Hi, @unixpickle
Thanks for your awesome work and open source.
I met the `nan` issue when training on ImageNet 128x128,
```
-----------------------------
| lg_loss_scale | -1.62e+04 |
| loss …
-
**Describe the bug**
When
```
ot = sinkhorn_divergence.sinkhorn_divergence(
geom,
x=geom.x,
y=geom.y,
static_b=True,
sinkhorn_kwargs={"rank":10,"initializer":'random'})
r…
-
-
- [ ] 是否有人在QA上做过训练阶段的攻击
- [ ] VQA上的攻击主要是在做什么?
- [ ] NLP大方向上别人是怎么攻击的?
思考:
- 为什么通过注入极少量(50-200左右)有毒数据,trigger + fake answer,最后模型就会一遇到关键词就给fake answer,为什么会work, 背后的机理是过拟合嘛?和meta-learning相关嘛?
- 后门攻…
-
如题,如果想重建特征,为什么不能阻止模型是全1的平凡形式(即强行让输入等于输出)输出一个loss为0的解?即使说模型非常复杂,很难达到平凡参数,但是模型趋近于平凡参数,loss也会接近0.这样的参数是没有意义的。无论说中间模型的结构多么复杂,我们怎么能够确保模型是在学习数据本身的pattern而非单纯复刻数据呢?请问可以帮我解答一下这个问题吗?感谢!
-
_ToDo: determine phd focus and scope_
Phd Funding project: https://www.tudelft.nl/en/2020/tu-delft/eur33m-research-funding-to-establish-trust-in-the-internet-economy
Duration: 1 Sep 2023 - 1 sep 2…
-
The following DocBook code contains a text paragraph, followed by a number of admonition blocks:
*admonitions.dbk*
```xml
Admonitions
2020-12-02
Lorem ipsum dolor sit …
-
After creating the environment for tr2. I tried `sh scripts/abstract_trajectories/blockstacking/collect.sh`, but there were too mant errors. So I tried the first row in _**collect.sh**_, and the follo…
-
### Microsoft PowerToys version
0.79.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Search iclr, ijca…