
Academic Lecture by Prof. Bei Yu, The Chinese University of Hong Kong

Date Published: 2018-08-06

Title: Accelerating Deep Convolutional Networks

Speaker: Prof. Bei Yu, The Chinese University of Hong Kong

Time: August 8, 2018, 9:00am-11:00am

Venue: Room 104, Building 3, School of Mathematics and Computer Science

Abstract: Deep neural networks (DNNs) have achieved significant success in a variety of real-world applications. However, the huge number of parameters in these networks limits their efficiency, owing to large model size and intensive computation. To address this issue, various compression and acceleration techniques have been investigated. In this talk I will introduce state-of-the-art DNN acceleration techniques from three perspectives: 1) how we can accelerate accurate DNN inference; 2) how we can accelerate inaccurate DNN inference; 3) how we can accelerate DNN design space exploration. In addition, I will also discuss some computer science & engineering skills that can contribute to a successful research career.
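To give a concrete flavor of the compression and acceleration techniques mentioned in the abstract, the sketch below illustrates one common family of approaches: magnitude-based weight pruning followed by 8-bit quantization of a single layer's weights. This is a minimal NumPy illustration under assumed settings (a hypothetical 256x256 weight matrix, 90% sparsity, per-tensor int8 scaling), not the specific methods covered in the talk.

```python
# Minimal sketch (illustrative only, not the speaker's method):
# magnitude-based pruning followed by 8-bit quantization of one dense layer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # hypothetical layer weights

# 1) Pruning: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

# 2) Quantization: map the remaining weights to int8 with a per-tensor scale.
scale = np.abs(W_pruned).max() / 127.0
W_int8 = np.round(W_pruned / scale).astype(np.int8)

# A sparse int8 tensor plus one float scale replaces the dense float32 tensor:
# roughly 4x smaller from quantization alone, more with a sparse storage format.
W_restored = W_int8.astype(np.float32) * scale
print("nonzero fraction:", np.count_nonzero(W_int8) / W_int8.size)
print("max quantization error on kept weights:", np.abs(W_pruned - W_restored).max())
```

The trade-off shown here (some numerical error in exchange for a smaller, cheaper model) is exactly the accurate-versus-inaccurate inference distinction the abstract draws.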

Speaker Bio: Prof. Bei Yu received his Ph.D. degree from the Department of Electrical and Computer Engineering, University of Texas at Austin in 2014. He is currently with the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He has served on the editorial boards of Integration, the VLSI Journal and IET Cyber-Physical Systems: Theory & Applications. He has received four Best Paper Awards at ISPD 2017, SPIE Advanced Lithography Conference 2016, ICCAD 2013, and ASPDAC 2012; three Best Paper Award Nominations at DAC 2014, ASPDAC 2013, and ICCAD 2011; and four ICCAD/ISPD contest awards.