RTLExplain: A Structured Approach to RTL Code Summarization and Question Answering for Medium-to-Large Designs Using LLMs. Ting-hsun (Austin) Chi, Charles Mackin, et al. DAC 2025.
Enhancing LLMs for HDL Code Optimization using Domain Knowledge Injection. Che-ming Chang, Prashanth Vijayaraghavan, et al. DAC 2025.
Architectural Benchmarking of Compute-in-Memory Systems. Pritish Narayanan, Sidney Tsai. VLSI Technology and Circuits 2025.
NORA: Noise-Optimized Rescaling of LLMs on Analog Compute-in-Memory Accelerators. Yayue Hou, Sidney Tsai, et al. DATE 2025.
Efficient Scaling of Large Language Models with Mixture of Experts and 3D Analog In-Memory Computing. Julian Büchel, A. Vasilopoulos, et al. Nat. Comput. Sci., 2025.
Multi-Task Neural Network Mapping onto Analog-Digital Heterogeneous Accelerators. Hadjer Benmeziane, Corey Liam Lammie, et al. NeurIPS 2024.
Architecture and Programming of Analog In-Memory-Computing Accelerators for Deep Neural Networks. Sidney Tsai, Pritish Narayanan, et al. IPDPS 2024.
Emerging Nonvolatile Memories for Analog Neuromorphic Computing. An Chen, Stefano Ambrogio, et al. ECS Spring Meeting 2024.
Design of Analog-AI Hardware Accelerators for Transformer-based Language Models (Invited). Geoffrey Burr, Sidney Tsai, et al. IEDM 2023.
Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference. Manuel Le Gallo, Corey Liam Lammie, et al. APL Mach. Learn., 2023.