【Abstract】
The size and complexity of recent deep learning models continue to grow exponentially, incurring substantial hardware overhead for training them. Unlike inference-only hardware, neural network training is very sensitive to computation errors; hence, training processors must support high-precision computation to avoid a significant drop in model performance, which severely limits their processing efficiency. This talk will introduce a comprehensive design approach for arriving at an optimal training processor. More specifically, the talk will discuss in depth how key design decisions for training processors should be made, including i) hardware-friendly training algorithms, ii) optimal data formats, and iii) processor architectures for high precision and high utilization.
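To illustrate the precision sensitivity the abstract alludes to, here is a minimal NumPy sketch (not part of the talk itself, and the specific numbers are illustrative assumptions): a small weight update is rounded away entirely when accumulation is done in half precision, but survives in single precision.

```python
import numpy as np

# Illustrative sketch: weight updates in training are often tiny relative to
# the weights themselves, so low-precision accumulation can round them away.
weight = 1.0
gradient_update = 1e-4  # a hypothetical small per-step update

# float16 accumulation: the update is below float16's spacing near 1.0
# (~9.8e-4), so the sum rounds back to 1.0 and the update is lost.
w_fp16 = np.float16(weight) + np.float16(gradient_update)
print(w_fp16)  # 1.0

# float32 accumulation: the update is preserved.
w_fp32 = np.float32(weight) + np.float32(gradient_update)
print(w_fp32)  # ~1.0001
```

Inference only runs the forward pass and tolerates such rounding far better, which is one reason training hardware typically needs wider accumulators or carefully chosen mixed-precision data formats.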

【Biography】
Dongsuk Jeon received a B.S. degree in electrical engineering from Seoul National University, Seoul, South Korea, in 2009, and a Ph.D. degree in electrical engineering from the University of Michigan, Ann Arbor, MI, USA, in 2014. From 2014 to 2015, he was a Postdoctoral Associate with the Massachusetts Institute of Technology, Cambridge, MA, USA. He is currently an Associate Professor with the Graduate School of Convergence Science and Technology, Seoul National University. His current research interests include hardware-oriented machine learning algorithms, hardware accelerators, and low-power circuits.
Dr. Jeon was a recipient of the Samsung Scholarship for Graduate Studies in 2009, the Samsung Humantech Thesis Contest Gold Award in 2021, and the Best Design Award at the International Symposium on Low Power Electronics and Design (ISLPED) in 2021. He has served on the Technical Program Committees of the ACM/IEEE Design Automation Conference and the IEEE/ACM Asia and South Pacific Design Automation Conference. He is currently serving as a Distinguished Lecturer of the IEEE Solid-State Circuits Society and an Associate Editor of the IEEE Transactions on Very Large Scale Integration (VLSI) Systems.