Algorithms and Architectures for Deep Learning with Applications
December 17 (9am-10am)
Prof. Tokunbo Ogunfunmi (Santa Clara University)
Abstract Deep learning and machine learning are becoming increasingly indispensable tools and methods for learning from data in order to make decisions and interact with our environment. Convolutional Neural Networks (CNNs) are key to several popular applications of deep learning, such as image perception and speech recognition. The increasing use of CNNs in applications on mobile devices and in data centers has led researchers to explore application-specific hardware accelerators for CNNs, which typically consist of a number of convolution, activation, and pooling layers, of which the convolution layers are the most computationally intensive. Though popular for accelerating CNN training and inference, GPUs are not suitable for embedded applications because they are not energy efficient. ASIC and FPGA accelerators have the potential to run CNNs optimized for both energy and performance. We present an overview of the key technology areas and research challenges, and review key concepts in machine learning theory: classification vs. regression models, supervised and unsupervised learning, linear and logistic regression, regularization, neural networks, and machine learning system design. We then cover key concepts of deep learning, with example applications such as handwriting recognition and recommender systems. Finally, we discuss in detail two new methods for two-dimensional (2-D) convolution that offer considerable reductions in power and computational complexity, improved efficiency, and a considerably better architecture for hardware implementation of CNNs. The first method computes convolution results from row-wise inputs, as opposed to traditional tile-based processing, giving considerably reduced latency. The second method, Single Partial Product 2-D (SPP2D) convolution, prevents recalculation of partial weights and reduces input reuse. Hardware implementation results and comparisons are presented.
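As a minimal illustration of the operation these accelerators target (not the speaker's row-wise or SPP2D architectures, which are presented in the talk), a direct 2-D "valid" convolution layer, computed in the usual CNN convention as cross-correlation, can be sketched as follows; the input and kernel sizes are arbitrary example values:

```python
import numpy as np

def conv2d_valid(x, w):
    """Direct 2-D 'valid' convolution (CNN-style cross-correlation).

    Every output pixel requires a full K*K multiply-accumulate, so the
    cost is O(H*W*K^2) -- the reason convolution layers dominate CNN
    compute and are the focus of hardware acceleration.
    """
    H, W = x.shape
    K, _ = w.shape
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(H - K + 1):
        for j in range(W - K + 1):
            # Multiply-accumulate over one K x K input window.
            out[i, j] = np.sum(x[i:i + K, j:j + K] * w)
    return out

x = np.arange(16.0).reshape(4, 4)   # toy 4x4 input feature map
w = np.ones((3, 3))                 # toy 3x3 kernel
print(conv2d_valid(x, w))           # 2x2 output
```

The nested loops make the data-reuse pattern explicit: each input pixel is read up to K*K times, which is exactly the reuse that tile-based and row-wise hardware schedules organize differently.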
Biography Tokunbo Ogunfunmi received the B.S. (first class honors) degree from Obafemi Awolowo University (OAU) (formerly the University of Ife), Ile-Ife, Nigeria, and the M.S. and Ph.D. degrees from Stanford University, Stanford, California, all in Electrical Engineering. He is currently a Professor of Electrical and Computer Engineering and Director of the Signal Processing Research Laboratory at Santa Clara University (SCU), Santa Clara, California. From 2010 to 2014, he served as the Associate Dean for Research and Faculty Development for the SCU School of Engineering. At SCU, he teaches a variety of courses in circuits, systems, and signal-processing-related areas, including a new course on autonomous vehicle systems. His current research interests include machine learning, deep learning, speech and multimedia (audio, video) compression, digital and adaptive signal processing and its applications, and nonlinear signal processing. He has published four books and more than 200 refereed journal and conference papers in these areas. Dr. Ogunfunmi served as an IEEE Distinguished Lecturer for the Circuits and Systems (CAS) Society from 2013 to 2014. He has also served on the Editorial Boards of the IEEE Transactions on CAS-I, IEEE Transactions on CAS-II, and IEEE Signal Processing Letters. He currently serves on the Editorial Boards of the journal Circuits, Systems and Signal Processing and the IEEE Transactions on Signal Processing.
Trust Computing with Learning-based Auction for Distributed Systems
December 18 (9am-10am)
Prof. Joongheon Kim (Korea University)
Abstract In modern distributed computing systems, econometric theories such as auction theory and game theory are widely used for resource management. Among these, auction theory is actively used for trust computing in distributed computing under certainty. In conventional auctions, first-price auction (FPA) algorithms are revenue-optimal but not truthful, whereas second-price auction (SPA) algorithms are truthful but not revenue-optimal. Therefore, new approaches that are both truthful and revenue-optimal are needed; such a mechanism is called an optimal auction. In this talk, an optimal auction is designed based on deep learning frameworks, and the corresponding applications are introduced in the context of blockchain and unmanned aerial vehicle (UAV) networks.
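The FPA/SPA trade-off can be made concrete with a toy single-item, sealed-bid sketch (an illustration only, not the deep-learning-based optimal auction from the talk; the bidder names and values are made up):

```python
def first_price_auction(bids):
    """Winner pays their own bid: higher seller revenue, but bidders
    have an incentive to shade bids below their true values (untruthful)."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

def second_price_auction(bids):
    """Winner pays the second-highest bid (Vickrey auction): truthful
    bidding is a dominant strategy, but seller revenue is generally lower."""
    winner = max(bids, key=bids.get)
    ordered = sorted(bids.values(), reverse=True)
    return winner, ordered[1]

bids = {"a": 10, "b": 7, "c": 4}        # hypothetical resource bids
print(first_price_auction(bids))        # ('a', 10): winner pays own bid
print(second_price_auction(bids))       # ('a', 7): winner pays runner-up's bid
```

An optimal auction seeks both properties at once, which is what the talk's deep-learning-based design targets.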
Biography Prof. Joongheon Kim has been with the School of Electrical Engineering, Korea University, Seoul, Korea, since September 2019. He received the B.S. and M.S. degrees in computer science and engineering from Korea University, Seoul, Korea, in 2004 and 2006, respectively, and the Ph.D. degree in computer science from the University of Southern California (USC), Los Angeles, CA, USA, in 2014. Before joining Korea University, he was with LG Electronics (Seoul, Korea, 2006–2009), InterDigital (San Diego, CA, USA, 2012), Intel Corporation (Santa Clara in Silicon Valley, CA, USA, 2013–2016), and Chung-Ang University (Seoul, Korea, 2016–2019). He is a senior member of the IEEE and serves as an associate editor for the IEEE Transactions on Vehicular Technology. He has published more than 80 journal articles, 110 conference papers, and 6 book chapters internationally. He also holds more than 50 patents, mainly related to 60 GHz millimeter-wave IEEE 802.11ad and IEEE 802.11ay standardization. He was a recipient of the Annenberg Graduate Fellowship upon his Ph.D. admission to USC (2009), the Intel Corporation Next Generation and Standards (NGS) Division Recognition Award (2015), the Haedong Young Scholar Award by KICS (2018), the IEEE Vehicular Technology Society (VTS) Seoul Chapter Award (2019), the Outstanding Contribution Award by KICS (2019), the Gold Paper Award from the IEEE Seoul Section Student Paper Contest (2019), and the IEEE Systems Journal Best Paper Award (2020).