Optimizing and Benchmarking Large-Scale Deep Learning
Event Type: Machine Learning Day
AI/Machine Learning/Deep Learning
Time: Wednesday, June 19th, 2:15pm - 2:45pm CEST
Location: Panorama 3
Description: We introduce schemes to optimize communication in deep learning workloads. To do so, we exploit properties of the standard SGD algorithm that allow us to delay sending parts of the gradient updates. Our implementation, SparCML, speeds up practical workloads significantly.

We then discuss Deep500: the first customizable benchmarking infrastructure that enables fair comparison of the plethora of deep learning frameworks, algorithms, libraries, and techniques. The key idea behind Deep500 is its modular design, which factorizes deep learning into four distinct levels: operators, network processing, training, and distributed training. Our evaluation illustrates that Deep500 is customizable (it enables combining and benchmarking different deep learning codes) and fair (it uses carefully selected metrics). Moreover, Deep500 is fast (it incurs negligible overheads), verifiable (it offers infrastructure to analyze correctness), and reproducible. Finally, as the first distributed and reproducible benchmarking system for deep learning, Deep500 provides the software infrastructure to utilize the most powerful supercomputers for extreme-scale workloads.
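The idea of delaying parts of the gradient updates can be illustrated with a small sketch. This is not the SparCML implementation itself (which uses sparse collective communication over MPI); it is a minimal, hypothetical NumPy illustration of the underlying trick: send only the largest-magnitude gradient components now and accumulate the rest locally for a later step. The function name `sparsify_gradient` and the parameter `k` are assumptions for illustration.

```python
import numpy as np

def sparsify_gradient(grad, residual, k):
    """Add the new gradient to the locally accumulated residual,
    transmit only the k largest-magnitude components, and keep
    the remainder as residual (the "delayed" part of the update)."""
    acc = grad + residual
    # indices of the k largest-magnitude entries
    idx = np.argpartition(np.abs(acc), -k)[-k:]
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]
    new_residual = acc - sparse  # components whose sending is delayed
    return sparse, new_residual

# toy example: send only 2 of 5 gradient components this step
grad = np.array([0.5, -0.1, 0.05, 1.2, -0.02])
residual = np.zeros_like(grad)
sparse, residual = sparsify_gradient(grad, residual, k=2)
```

Because the skipped components are accumulated rather than discarded, no gradient information is lost; it merely arrives later, which is the SGD property the talk refers to.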
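Deep500's modular design can likewise be sketched in miniature. The snippet below is a hypothetical illustration, not the Deep500 API: it shows how factoring the stack into levels (here, the operator level) lets one harness and one set of metrics apply uniformly to codes from different frameworks. The class names `OperatorBenchmark`, `MetricTimer`, and `ScaleOp` are assumptions for illustration.

```python
import time
from abc import ABC, abstractmethod

class OperatorBenchmark(ABC):
    """Sketch of an operator-level interface: each framework wraps
    its operator behind the same method, so the same benchmark
    harness can compare all of them fairly."""
    @abstractmethod
    def forward(self, x): ...

class MetricTimer:
    """A carefully chosen metric (here, mean wall-clock time)
    applied uniformly to any operator implementation."""
    def measure(self, op, x, repeats=10):
        start = time.perf_counter()
        for _ in range(repeats):
            op.forward(x)
        return (time.perf_counter() - start) / repeats

# toy operator standing in for a framework-specific kernel
class ScaleOp(OperatorBenchmark):
    def forward(self, x):
        return [2 * v for v in x]

avg = MetricTimer().measure(ScaleOp(), [1, 2, 3])
```

The same pattern would repeat at the higher levels (network processing, training, distributed training), each level combinable with different implementations below it.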
Associate Professor