Solving IO Bottlenecks for Deep Learning on Large Scale Brain Images
Machine Learning Day
AI/Machine Learning/Deep Learning
Big Data Analytics
Time: Wednesday, June 19th, 11:00am – 11:22am CEST
Description: The use of Deep Learning methods has been identified as a key opportunity for enabling the processing of extreme-scale scientific datasets. Processing these datasets requires the ability to store petabytes of data and to access that data with very high bandwidth. Many HPC clusters still follow the Beowulf architecture, where the compute nodes have little or no storage integrated within the node. For cytoarchitectonic brain mapping, for example, large-scale images (up to 22 GB per image) are accessed, which causes massive IO problems on our systems due to very high bandwidth requirements and random, fine-grained access patterns. Hierarchical storage architectures are a promising technology for allowing faster access to frequently used data. However, using staging layers efficiently is hard, since the faster layers usually have lower capacity. We evaluate different methods of staging frequently used data in faster storage layers. Our staging techniques not only copy the data, but also perform transformations that lead to better access patterns and reduce IO, increasing the total performance of our Deep Learning applications by up to a factor of ten compared to the original applications.
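A minimal sketch of the idea of transforming data while staging it to faster storage (the tile size, file layout, and function names here are our own illustrative assumptions, not details from the talk): a row-major image forces many scattered reads when training code fetches small square patches, whereas rewriting the image into contiguous fixed-size tiles during staging turns each patch access into a single sequential read.

```python
import os
import tempfile
import numpy as np

TILE = 64  # hypothetical tile edge length, chosen for illustration

def stage_as_tiles(src_path, dst_path, shape, dtype=np.uint8, tile=TILE):
    """Stage a row-major raw image into contiguous tiles on faster storage.

    In a row-major file, reading a tile x tile patch needs `tile` scattered
    row reads; after staging, each tile is one contiguous byte range.
    """
    img = np.memmap(src_path, dtype=dtype, mode="r", shape=shape)
    tiles_y, tiles_x = shape[0] // tile, shape[1] // tile
    with open(dst_path, "wb") as f:
        for ty in range(tiles_y):
            for tx in range(tiles_x):
                block = img[ty * tile:(ty + 1) * tile,
                            tx * tile:(tx + 1) * tile]
                f.write(np.ascontiguousarray(block).tobytes())
    return tiles_y, tiles_x

def read_tile(dst_path, ty, tx, tiles_x, dtype=np.uint8, tile=TILE):
    """Fetch one tile with a single seek + sequential read."""
    nbytes = tile * tile * np.dtype(dtype).itemsize
    with open(dst_path, "rb") as f:
        f.seek((ty * tiles_x + tx) * nbytes)
        buf = f.read(nbytes)
    return np.frombuffer(buf, dtype=dtype).reshape(tile, tile)

# Demo: src_path stands in for the parallel file system copy,
# dst_path for the node-local (faster, smaller) staging layer.
tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "image.raw")
dst_path = os.path.join(tmp, "image.tiled")
shape = (256, 256)
data = np.random.default_rng(0).integers(0, 256, size=shape, dtype=np.uint8)
data.tofile(src_path)

tiles_y, tiles_x = stage_as_tiles(src_path, dst_path, shape)
patch = read_tile(dst_path, 1, 2, tiles_x)
assert np.array_equal(patch, data[64:128, 128:192])
```

Real pipelines would more likely stage into a chunked container format such as HDF5 or Zarr, but the raw-file version above shows why the transformation, not just the copy, is what removes the fine-grained random IO.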