AkiraTOSEI / ML_papers

ML paper summaries (in Japanese)

Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures #127


AkiraTOSEI commented 3 years ago

TL;DR

A study of Direct Feedback Alignment (DFA), a training method that replaces the sequential backward pass of backpropagation and lets parameter updates be computed in parallel. The authors apply it to a variety of modern tasks and architectures, including Transformers, graph convolutional networks, and recommender systems, and confirm that it achieves reasonable results.
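
For reference, the mechanism being summarized: backpropagation sends the error backward layer by layer through the transposed forward weights, whereas DFA projects the output error to every hidden layer through a fixed random matrix, so each layer's update has no sequential dependency on the others. Below is a minimal NumPy sketch of a DFA update for a toy two-layer MLP; the sizes, names (`dfa_grads`, `B1`), and squared-error loss are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer sizes, chosen only for illustration.
n_in, n_hidden, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))   # forward weights, layer 1
W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))  # forward weights, layer 2

# Fixed random feedback matrix: carries the output error directly to the
# hidden layer, in place of the transposed forward weights BP would use.
B1 = rng.normal(0.0, 0.1, (n_hidden, n_out))

def dfa_grads(x, y):
    """Weight updates for one example under DFA (squared-error loss)."""
    # Forward pass.
    a1 = W1 @ x
    h1 = np.tanh(a1)
    y_hat = W2 @ h1

    # Global error signal at the output.
    e = y_hat - y

    # Each layer receives e through its own fixed random projection, so
    # the two updates have no sequential dependency and can be computed
    # in parallel; no backward pass through W2 is needed.
    dW2 = np.outer(e, h1)
    dW1 = np.outer((B1 @ e) * (1.0 - h1 ** 2), x)  # tanh'(a1) = 1 - tanh(a1)^2
    return dW1, dW2

# One illustrative update step on random data.
x, y = rng.normal(size=n_in), rng.normal(size=n_out)
dW1, dW2 = dfa_grads(x, y)
W1 -= 0.01 * dW1
W2 -= 0.01 * dW2
```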

Table 2 (from the paper): AUC (higher is better) and log loss (lower is better) of recommender systems.

Why it matters:

Because DFA needs no sequential backward pass, each layer's update can be computed independently, which opens the door to more parallel, hardware-friendly training; this paper shows the idea holds up beyond toy settings, on modern tasks and architectures.

Paper URL

https://arxiv.org/abs/2006.12878

Submission Date (yyyy/mm/dd)

2020/06/23

Authors and institutions

Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala

Methods

Results

Comments