HephaestusProject / pytorch-transformer


[PAPER] Attention is all you need #1

Open · inmoonlight opened this issue 4 years ago

inmoonlight commented 4 years ago

πŸ“ Introduction

The Transformer, which currently serves as the foundation of a wide variety of NLP models.

Why?

NLPμ—μ„œ 많이 λ‹€λ€„μ§€λŠ” λͺ¨λΈμ΄κΈ° λ•Œλ¬Έμž…λ‹ˆλ‹€.

Issue card

0. Tokenizer ν•™μŠ΅

NOTE: This repo assumes the tokenizer has already been trained.

1. Data Loader
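
   A minimal sketch of what the data loader could look like, assuming source/target sentences have already been converted to token ids by the tokenizer above. `TranslationDataset`, `collate_fn`, and `PAD_ID` are hypothetical names for illustration, not code from this repo.

   ```python
   import torch
   from torch.nn.utils.rnn import pad_sequence
   from torch.utils.data import DataLoader, Dataset

   PAD_ID = 0  # hypothetical pad token id; depends on the trained tokenizer


   class TranslationDataset(Dataset):
       """Hypothetical dataset over pre-tokenized (source, target) id sequences."""

       def __init__(self, src_ids, tgt_ids):
           self.src_ids = src_ids
           self.tgt_ids = tgt_ids

       def __len__(self):
           return len(self.src_ids)

       def __getitem__(self, idx):
           return (torch.tensor(self.src_ids[idx], dtype=torch.long),
                   torch.tensor(self.tgt_ids[idx], dtype=torch.long))


   def collate_fn(batch):
       # Pad each batch to its longest sequence so tensors are rectangular.
       src, tgt = zip(*batch)
       return (pad_sequence(src, batch_first=True, padding_value=PAD_ID),
               pad_sequence(tgt, batch_first=True, padding_value=PAD_ID))


   loader = DataLoader(TranslationDataset([[5, 6, 7]], [[8, 9]]),
                       batch_size=32, shuffle=True, collate_fn=collate_fn)
   ```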

2. Model

  1. Encoder
    1. Word Embedding
    2. Positional Encoding (sketched after this list)
    3. Multihead attention
  2. Decoder
    1. Word Embedding
    2. Masked multihead attention (mask helper sketched after this list)
  3. Inference
    1. Beam Search
  4. Utils
    1. LayerNorm
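
   For the two formula-driven items above, here is a minimal sketch: the sinusoidal positional encoding from the paper, PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)), plus the subsequent-position mask that the decoder's masked multihead attention needs. This is an illustration (assuming an even d_model), not this repo's implementation.

   ```python
   import math

   import torch
   import torch.nn as nn


   class PositionalEncoding(nn.Module):
       """Sinusoidal positional encoding from 'Attention Is All You Need' (Sec. 3.5)."""

       def __init__(self, d_model: int, max_len: int = 5000):
           super().__init__()
           position = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
           div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
           pe = torch.zeros(max_len, d_model)
           pe[:, 0::2] = torch.sin(position * div_term)  # even dims: sin
           pe[:, 1::2] = torch.cos(position * div_term)  # odd dims: cos
           self.register_buffer("pe", pe)

       def forward(self, x):  # x: (batch, seq_len, d_model)
           return x + self.pe[: x.size(1)]


   def subsequent_mask(size: int) -> torch.Tensor:
       """Lower-triangular mask: position i may attend only to positions <= i."""
       return torch.tril(torch.ones(size, size, dtype=torch.bool))
   ```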

3. Trainer

  1. W&B (logging sketch after this list)
  2. Config Manager
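
   A hedged sketch of how the trainer might wire in Weights & Biases. `wandb.init`, `wandb.log`, and `run.finish` are the real wandb API, but the project name, the config values (which the Config Manager would supply), and `train_step` are placeholder assumptions.

   ```python
   import random

   import wandb


   def train_step() -> float:
       """Stand-in for one optimizer step; returns a dummy loss for illustration."""
       return random.random()


   config = {"d_model": 512, "n_heads": 8, "lr": 1e-4}  # values the Config Manager would supply

   run = wandb.init(project="pytorch-transformer", config=config)  # project name is an assumption
   for step in range(100):
       wandb.log({"train/loss": train_step()})
   run.finish()
   ```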

Schedule

1. 2. 3.

Reference

Vaswani et al., "Attention Is All You Need", NeurIPS 2017. https://arxiv.org/abs/1706.03762

seopbo commented 4 years ago

Oh, I was planning to do this one next too haha. Mind if I join in here? hehe

inmoonlight commented 4 years ago

@aisolab It would be an honor! +_+ Sounds great.