ML-HK / paper-discussion-group

Discussion group of machine learning papers in HKUST

RECOMMEND/VOTE Papers #11

Open sxjscience opened 7 years ago

sxjscience commented 7 years ago

How to recommend

We can recommend papers for further discussion under this issue. Include a link to the paper, the conference name, and other related information (such as the abstract, a brief description, and links to sample code or online demos).

Please include only one topic per comment. For example, if you propose to discuss "paper X", which builds heavily on "paper Y", and you believe both should be read together (possibly over multiple weeks), create a single comment for them. If you propose two unrelated papers, please create two comments.

For example, the following markdown format could be used.

```
[**PAPER-NAME**](PAPER-LINK) (AUTHORS)
(CONFERENCE/JOURNAL)
_ABSTRACT_
```

How to vote

Please vote using the "thumbs up" emoji :thumbsup:. Papers that have already been covered will be marked as **Discussed**; please vote only for papers that have not yet been discussed.

sxjscience commented 7 years ago

See https://github.com/ML-HK/paper-discussion-group/issues/3 for more examples of how to recommend.

peterzcc commented 7 years ago

[**Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic**](https://openreview.net/forum?id=SJ3rcZcxl&noteId=SJ3rcZcxl) (Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine)
(ICLR 2017)
_Abstract: Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is the high sample complexity of such methods. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches, while TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments._
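For anyone skimming before the discussion, here is a minimal numpy sketch of the generic control-variate trick that Q-Prop builds on. The toy distribution, variable names, and quantities are illustrative assumptions only; the paper applies this idea to the policy gradient via a Taylor expansion of the off-policy critic, which is not reproduced here.

```python
import numpy as np

# Control-variate idea in its simplest form: subtract a correlated quantity
# with a known mean from a Monte Carlo estimator to reduce variance without
# introducing bias. (Toy example, not the Q-Prop estimator itself.)
rng = np.random.default_rng(0)

x = rng.normal(loc=1.0, scale=2.0, size=10_000)  # samples
f = x ** 2                                       # quantity of interest, E[f] unknown
g = x                                            # control variate with known mean E[g] = 1.0

# Near-optimal scaling of the control variate: beta = Cov(f, g) / Var(g)
beta = np.cov(f, g)[0, 1] / np.var(g)

naive = f.mean()
controlled = (f - beta * (g - 1.0)).mean()       # same expectation, lower variance

print(f"naive estimate      : {naive:.3f}  (sample var {f.var():.2f})")
print(f"with control variate: {controlled:.3f}  (sample var {(f - beta * (g - 1.0)).var():.2f})")
```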

ckyeungac commented 7 years ago

[**Bounded Off-Policy Evaluation with Missing Data for Course Recommendation and Curriculum Design**](http://medianetlab.ee.ucla.edu/papers/LoggedStudents.pdf) (William Hoiles, Mihaela van der Schaar)
(ICML 2016)
_Abstract: Successfully recommending personalized course schedules is a difficult problem given the diversity of students' knowledge, learning behaviour, and goals. This paper presents personalized course recommendation and curriculum design algorithms that exploit logged student data. The algorithms are based on the regression estimator for contextual multi-armed bandits with a penalized variance term. Guarantees on the predictive performance of the algorithms are provided using empirical Bernstein bounds. We also provide guidelines for including expert domain knowledge into the recommendations. Using undergraduate engineering logged data from a post-secondary institution, we illustrate the performance of these algorithms._
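As a rough illustration of the general recipe (evaluating a recommendation rule from logged data with importance weighting, penalized by the variance of the estimate), here is a self-contained numpy sketch. The simulated data, uniform logging propensities, and the `penalty` coefficient are assumptions for illustration; this is not the paper's regression estimator or its empirical Bernstein bound.

```python
import numpy as np

# Variance-penalized off-policy evaluation from logged data (toy sketch).
rng = np.random.default_rng(0)
n, n_actions = 5_000, 4

logged_actions = rng.integers(n_actions, size=n)        # actions chosen by the logging policy
logging_probs = np.full(n, 1.0 / n_actions)             # propensities of those actions
rewards = rng.binomial(1, 0.3 + 0.1 * logged_actions)   # observed outcomes (e.g. course success)

def penalized_value(target_action: int, penalty: float = 1.0) -> float:
    """Importance-weighted value of 'always recommend target_action',
    minus a variance penalty so noisy estimates are not over-trusted."""
    weights = (logged_actions == target_action) / logging_probs
    estimates = weights * rewards
    return estimates.mean() - penalty * np.sqrt(estimates.var(ddof=1) / n)

best = max(range(n_actions), key=penalized_value)
print("recommended action:", best)
```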

sxjscience commented 7 years ago

[**Failures of Deep Learning**](https://arxiv.org/pdf/1703.07950.pdf) (Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah)
(arXiv 2017)
_Abstract: In recent years, Deep Learning has become the go-to solution for a broad range of applications, often outperforming state-of-the-art. However, it is important, for both theoreticians and practitioners, to gain a deeper understanding of the difficulties and limitations associated with common approaches and algorithms. We describe four families of problems for which some of the commonly used existing algorithms fail or suffer significant difficulty. We illustrate the failures through practical experiments, and provide theoretical insights explaining their source, and how they might be remedied._

See also:

- https://simons.berkeley.edu/sites/default/files/docs/6455/berkeley2017.pdf
- https://simons.berkeley.edu/talks/shai-shalev-shwartz-2017-3-28
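For flavor ahead of the discussion, here is a small numpy sketch of one well-known gradient-based failure mode (gradient signal collapsing through saturated activations). This toy is an assumption on my part and is not necessarily one of the four problem families studied in the paper.

```python
import numpy as np

# Toy demonstration: backpropagated gradients shrink rapidly through
# many saturating sigmoid layers with large weights.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(1, 32))
upstream = np.ones((1, 32))      # gradient of 1 arriving at the output

for depth in (1, 5, 10, 20):
    h, g = x, upstream
    for _ in range(depth):
        w = rng.normal(scale=2.0, size=(32, 32))  # large weights -> saturation
        a = sigmoid(h @ w)
        g = (g * a * (1.0 - a)) @ w.T             # backprop through the sigmoid layer
        h = a
    print(f"depth {depth:2d}: mean |gradient at input| = {np.abs(g).mean():.2e}")
```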