When Polyhedral Optimizations Meet Deep Learning Kernels

Vaidya, Hrishikesh and Badrinaaraayanan, Akilesh and Patwardhan, Abhishek A and Upadrasta, Ramakrishna (2019) When Polyhedral Optimizations Meet Deep Learning Kernels. In: IEEE HiPC.


Abstract

Deep Neural Networks (DNNs) are well understood to be among the largest consumers of HPC resources, and efficiently running their training and inference phases on modern heterogeneous architectures (and accelerators) poses an important challenge for the compilation community. Currently, DNNs are actively being studied by the automatic parallelization and polyhedral compilation communities for the same purpose. In this (initial) paper, we study the kernels of four varieties of DNN layers with the goal of applying automatic parallelization techniques for the latest architectures. We show the affine (polyhedral) nature of these kernels, thereby showing that they are amenable to well-known polyhedral compilation techniques. For benchmarking purposes, we implemented forward and backward kernels for four varieties of layers, namely convolutional, pooling, recurrent, and long short-term memory (LSTM), in PolyBench/C, a well-known polyhedral benchmarking suite. We also evaluated our kernels with the state-of-the-art Pluto polyhedral compiler in order to highlight the speedups obtained by automatic loop transformations.
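To illustrate what "affine nature" means here, the following is a minimal sketch (not the paper's actual benchmark code) of a forward pass of a single-channel 2D convolutional layer written as a static affine loop nest in the style of PolyBench/C kernels. All loop bounds and array subscripts are affine functions of the loop iterators and fixed size parameters, so the nest forms a static control part (SCoP) that polyhedral tools such as Pluto can analyze and transform. The sizes NH, NW, KH, KW and the function name conv2d_forward are illustrative assumptions, not taken from the paper.

/* Illustrative PolyBench/C-style sketch of an affine convolution kernel. */
#include <stdio.h>

#define NH 128   /* input height (illustrative size) */
#define NW 128   /* input width  (illustrative size) */
#define KH 3     /* filter height */
#define KW 3     /* filter width  */

static float in[NH][NW];
static float w[KH][KW];
static float out[NH - KH + 1][NW - KW + 1];

static void conv2d_forward(void)
{
  int i, j, ki, kj;
#pragma scop
  /* Every access (out[i][j], w[ki][kj], in[i+ki][j+kj]) is affine in
   * the iterators (i, j, ki, kj), so the whole nest is a valid SCoP. */
  for (i = 0; i < NH - KH + 1; i++)
    for (j = 0; j < NW - KW + 1; j++) {
      out[i][j] = 0.0f;
      for (ki = 0; ki < KH; ki++)
        for (kj = 0; kj < KW; kj++)
          out[i][j] += w[ki][kj] * in[i + ki][j + kj];
    }
#pragma endscop
}

int main(void)
{
  int i, j, ki, kj;
  /* Trivial initialization so the kernel has defined inputs. */
  for (i = 0; i < NH; i++)
    for (j = 0; j < NW; j++)
      in[i][j] = (float)((i + j) % 7);
  for (ki = 0; ki < KH; ki++)
    for (kj = 0; kj < KW; kj++)
      w[ki][kj] = 1.0f / (KH * KW);

  conv2d_forward();
  printf("out[0][0] = %f\n", out[0][0]);
  return 0;
}

The #pragma scop / #pragma endscop markers delimit the region handed to the polyhedral toolchain; a source-to-source compiler such as Pluto can then apply loop transformations (for example tiling and parallelization) to this region automatically.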

IITH Creators:
Item Type: Conference or Workshop Item (Paper)
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: Team Library
Date Deposited: 16 May 2019 10:52
Last Modified: 28 May 2019 07:43
URI: http://raiithold.iith.ac.in/id/eprint/5200
Publisher URL:
Related URLs:
