Chia-Yu Chen, Jungwook Choi, et al. "Exploiting Approximate Computing for Deep Learning Acceleration." DATE 2018.
Swagath Venkataramani, Jungwook Choi, et al. "POSTER: Design Space Exploration for Performance Optimization of Deep Neural Networks on Shared Memory Accelerators." PACT 2017.
Swagath Venkataramani, Ashish Ranjan, et al. "ScaleDeep: A Scalable Compute Architecture for Learning and Evaluating Deep Networks." ISCA 2017.
Ankur Agrawal, Chia-Yu Chen, et al. "INVITED: Accelerator Design for Deep Learning Training (Extended Abstract)." DAC 2017.
US11551054 (granted 09 Jan 2023): "System-aware Selective Quantization for Performance Optimized Distributed Deep Learning."
US11429524 (granted 29 Aug 2022): "Optimized Hierarchical Scratchpads for Enhanced Artificial Intelligence Accelerator Core Utilization."
Mori Ohara — Deputy Director, IBM Research Tokyo; Distinguished Engineer; Chief SW Engineer for Hybrid Cloud on IBM HW.