
EuroSys '19 | Supporting Very Large Models using Automatic Dataflow Graph Partitioning #38

Open jasperzhong opened 4 years ago

jasperzhong commented 4 years ago

https://arxiv.org/pdf/1807.08887.pdf

Recommended by zihao.

jasperzhong commented 4 years ago

I think my previous understanding of model parallelism was somewhat off, so I looked through a few articles.

https://medium.com/@esaliya/model-parallelism-in-deep-learning-is-not-what-you-think-94d2f81e82ed

(image) This figure is very interesting.

So overall, many people treat workload partitioning as model parallelism (I always thought of it that way, since it can form a pipeline), but "pure" model parallelism is really the tensor-partitioning camp. That said, workload partitioning still counts as model parallelism; it just cannot actually run in parallel.

jasperzhong commented 4 years ago

Skimmed the paper.

It first introduces what tensor partitioning is. I think the figure below explains it well: the partitioning can be done along different dimensions, e.g., along the batch dimension, or along the input_channel dimension.

(image)
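To make this concrete for myself, here is a minimal numpy sketch (my own illustration, not code from the paper): the same conv-style input tensor can be split along different dimensions for two workers.

```python
import numpy as np

x = np.random.rand(8, 16, 32, 32)  # (batch, in_channels, H, W), made-up sizes

# Partition along the batch dimension: each worker gets half of the samples.
batch_shards = np.split(x, 2, axis=0)    # two shards of shape (4, 16, 32, 32)

# Or partition along the input_channel dimension instead.
channel_shards = np.split(x, 2, axis=1)  # two shards of shape (8, 8, 32, 32)

print(batch_shards[0].shape, channel_shards[0].shape)
```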

It then introduces the concept of partition-n-reduce: operator c can be executed as the same op in parallel on multiple workers, and the partial results O1 and O2 can be merged into O in two ways: (1) direct concatenation, or (2) some reduce operation.
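Again a toy illustration of my own (numpy, not the paper's code) of the two merge options, using matmul as operator c:

```python
import numpy as np

A = np.random.rand(6, 4)
B = np.random.rand(4, 5)

# Option 1: partition A along its row dimension. Each worker computes a
# disjoint block of rows of O, so O1 and O2 are merged by concatenation.
A1, A2 = np.split(A, 2, axis=0)
O_concat = np.concatenate([A1 @ B, A2 @ B], axis=0)

# Option 2: partition along the contracted dimension (A's columns / B's rows).
# Each worker produces a full-shaped partial sum, merged by a reduce (add).
A1, A2 = np.split(A, 2, axis=1)
B1, B2 = np.split(B, 2, axis=0)
O_reduce = (A1 @ B1) + (A2 @ B2)

assert np.allclose(O_concat, A @ B)
assert np.allclose(O_reduce, A @ B)
```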

Next: since each op can be partitioned in several ways, and the whole graph has many ops, the number of possible partition plans explodes, and this optimization problem is NP-hard. I did not understand the following sentence:

When there are 2^m GPUs, each input/output tensor of an operator can be partitioned along a combination of any 1, 2, ..., or m dimensions,
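My rough reading (could be wrong): with 2^m GPUs the tensor is halved m times, and each halving step can pick any of its dimensions, so even a single operator already has a combinatorial number of legal configurations. A toy count under that interpretation:

```python
from itertools import combinations_with_replacement

def partition_configs(ndims, m):
    """Count ways to assign m binary splits to a tensor's dimensions.

    Toy interpretation of the quoted sentence, not Tofu's actual search space:
    with 2**m GPUs, each of the m halving steps picks one dimension, so a
    configuration is a multiset of chosen dimensions.
    """
    return list(combinations_with_replacement(range(ndims), m))

# A 4-D tensor (e.g. NCHW) on 2**2 = 4 GPUs:
configs = partition_configs(ndims=4, m=2)
print(len(configs), configs[:3])
# 10 configurations, e.g. (0, 0) = split batch 4-way,
# (0, 1) = halve batch and halve channel, (0, 2) = halve batch and halve H, ...
```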

The paper's starting point is to automatically do tensor partitioning for every operator. Previously a few operators, such as conv, were handled with hand-written partition rules, but hand-writing rules for hundreds of ops is infeasible. The authors' solution is a TDL (tensor description language) for each op: TDL describes how the output tensor is computed from the input tensors. A TDL description is roughly a lambda like this:

(image)

TDL contains the following information: (image)
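The actual TDL is the paper's own mini-language; the following is just a Python sketch I wrote of its flavor: the output tensor is described element-wise as a lambda over output indices, in terms of reads from the inputs.

```python
# Python sketch of the TDL flavor (not real TDL syntax). Inputs are modeled
# as access functions mapping indices to values.
import numpy as np

def matmult(A, B, K):
    # O[i, j] = sum_k A[i, k] * B[k, j]
    return lambda i, j: sum(A(i, k) * B(k, j) for k in range(K))

def conv1d(data, filters, n_channels, f_width):
    # O[x, co] = sum_{ci, dx} data[x + dx, ci] * filters[dx, ci, co]
    return lambda x, co: sum(
        data(x + dx, ci) * filters(dx, ci, co)
        for ci in range(n_channels)
        for dx in range(f_width)
    )

# From such a description one can read off, per output index, which input
# elements are needed -- which is what the partition analysis consumes.
a, b = np.random.rand(3, 4), np.random.rand(4, 5)
O = matmult(lambda i, k: a[i, k], lambda k, j: b[k, j], K=4)
print(O(0, 0), (a @ b)[0, 0])  # same value
```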

Tofu parses the TDL to derive partition strategies. I did not follow the symbolic interval analysis part....
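For the record, my rough guess (quite possibly wrong) at what the symbolic interval analysis does: treat a shard's output index range as an interval, push it through the index expressions in the TDL, and see which input region each shard would read; disjoint reads mean that dimension splits cleanly. A toy sketch under that assumption:

```python
class Interval:
    """Toy interval arithmetic over index expressions (not Tofu's analysis)."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# conv1d reads data[x + dx, ci]; say output length 8 and filter width 3.
out_len, f_width = 8, 3
dx = Interval(0, f_width - 1)

# Push each output half's x-interval through the access expression x + dx.
lower_reads = Interval(0, out_len // 2 - 1) + dx        # [0, 5]
upper_reads = Interval(out_len // 2, out_len - 1) + dx  # [4, 9]
print(lower_reads, upper_reads)

# The two halves' input reads overlap a bit (a halo region), whereas a
# dimension like batch would give disjoint intervals and split cleanly.
```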

Too much effort to read. Giving up.

jasperzhong commented 2 years ago

Two years later, I still can't make sense of it.