Minimizing weighted ℓp-norm of flow-time in the rejection model (FSTTCS 2015)
I am currently working in the Learning and Reasoning group at IBM Research - India. In particular, I am looking at efficient methods for scaling models for Neurosymbolic AI. The work involves speeding up inference and training of Logical Neural Networks using data and model parallelism. Prior to this, I contributed to the compression of deep neural models from different domains (such as images, videos, and text) to reduce inference time and memory footprint; this enabled complex models to run efficiently on low-resource systems such as mobile/edge devices, or in shared environments such as the cloud.
I have also worked in the High Performance Computing group. That work mainly involved the optimization and parallelization of scientific applications on massively parallel architectures. Projects to which I contributed include designing application kernels for Exascale architectures, optimizing the Graph500 and HPCC benchmarks (in particular RandomAccess) on Blue Gene/Q, and parallelizing financial engineering and computational nanotechnology applications on clusters and distributed systems.
I hold a Ph.D. in Computer Science and Engineering from the Indian Institute of Technology, Delhi (IITD). My supervisors were Prof. Naveen Garg and Prof. Amit Kumar, and my thesis was on approximation algorithms for job scheduling problems. Prior to that, I completed my B.E. from Jadavpur University, Kolkata and my M.Tech. from the Indian Institute of Technology, Kanpur (IITK). My master's thesis was in the area of formal languages.