
Optimizing Large-scale Deep Learning by Minimizing Resource Contention for Data Processing #32

Open joapolarbear opened 3 years ago

joapolarbear commented 3 years ago

Poster

Solution

  1. Use a global sleep time instead of the local cycle time, to avoid oversleeping (first sketch after this list).
  2. Nonblocking Cache Synchronization (second sketch below).
  3. Static CPU Resource Partitioning (third sketch below).
  4. Graph Topology Exploitation, to ensure the tensor order (fourth sketch below).
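
A minimal sketch of what point 1 might look like: instead of sleeping a locally computed remainder each cycle (where scheduling jitter makes every sleep overshoot and the error accumulates), the data worker tracks one global deadline. `PERIOD` and `do_preprocessing_cycle` are illustrative assumptions, not the paper's API.

```python
import time

PERIOD = 0.05  # assumed throttling period per data-processing cycle (seconds)

def do_preprocessing_cycle():
    """Placeholder for one cycle of data preprocessing work."""
    pass

def run_with_local_sleep(num_cycles):
    # Naive version: sleep a locally computed remainder each cycle.
    # Each sleep overshoots slightly, and the overshoot compounds.
    for _ in range(num_cycles):
        start = time.monotonic()
        do_preprocessing_cycle()
        remaining = PERIOD - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)

def run_with_global_sleep(num_cycles):
    # Global version: keep one global deadline and sleep only up to it,
    # so an overshoot in one cycle is absorbed by the next instead of
    # accumulating into oversleep.
    deadline = time.monotonic()
    for _ in range(num_cycles):
        do_preprocessing_cycle()
        deadline += PERIOD
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
```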
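
For point 2, one common way to make cache synchronization nonblocking is to hand writes to a background thread through a queue, so the preprocessing loop never waits on the (possibly slow) cache store. This is a generic sketch under that assumption; `cache_store` and its `.put()` interface are hypothetical, not the paper's component.

```python
import queue
import threading

class NonblockingCacheWriter:
    """Pushes cache updates to a background thread so callers never block."""

    def __init__(self, cache_store):
        self._store = cache_store          # hypothetical backing store with .put(key, value)
        self._pending = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def put(self, key, value):
        # Returns immediately; the actual synchronization happens in the background.
        self._pending.put((key, value))

    def _drain(self):
        while True:
            key, value = self._pending.get()
            self._store.put(key, value)    # potentially slow; kept off the critical path
            self._pending.task_done()

    def flush(self):
        # Optional barrier: wait until all queued writes have been applied.
        self._pending.join()
```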
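
Point 3 plausibly means pinning the data-processing workers to a fixed, disjoint CPU set so they cannot contend with the training threads. A Linux-only sketch using `os.sched_setaffinity`; the particular split (cores 0-3 for data workers) is an assumed example, not the paper's configuration.

```python
import os
import multiprocessing as mp

# Assumed static partition: data-processing workers get cores 0-3,
# training keeps the remaining cores.
DATA_CPUS = {0, 1, 2, 3}

def data_worker(worker_id):
    # Linux-only: restrict this process to the data-processing partition.
    os.sched_setaffinity(0, DATA_CPUS)
    # ... run the preprocessing loop here ...
    print(f"worker {worker_id} pinned to {os.sched_getaffinity(0)}")

if __name__ == "__main__":
    workers = [mp.Process(target=data_worker, args=(i,)) for i in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```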
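
Point 4 presumably uses the training graph's topology to decide the order in which input tensors are produced or prefetched, so each tensor is ready before any consumer that needs it. A generic topological-sort sketch; the dependency graph here is made up for illustration.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency graph: each tensor maps to the tensors it depends on.
tensor_deps = {
    "loss": {"logits", "labels"},
    "logits": {"features"},
    "features": {"images"},
    "images": set(),
    "labels": set(),
}

# Producing tensors in topological order guarantees dependencies are ready
# before their consumers, which fixes the tensor processing order.
production_order = list(TopologicalSorter(tensor_deps).static_order())
print(production_order)  # e.g. ['images', 'labels', 'features', 'logits', 'loss']
```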