AI Hardware Forum 2023
Yorktown Heights, NY, USA
About
The rapid adoption of generative AI and foundation models is transforming how we compute at scale and demands bold innovations in hardware and software infrastructure. Continuing to rely on conventional approaches is simply not sustainable and limits access to the technology.
If you are interested in learning more about this invite-only event, please reach out to us at ibmaihw@us.ibm.com.
Why attend
We will come together to discuss what's next in foundation models and the challenges in designing AI hardware to support complex and multi-modal workloads. There will be presentations from industry leaders, a special panel session, interactive demos by IBM experts, and a poster session showcasing research from leading universities.
Speakers
Mukesh Khare
Steven Huels
Matt Baker
Raja Swaminathan
Jeffery Burns
Soumith Chintala
Priya Nagpurkar
Yasumitsu Orii
Agenda
Registration opens at 9:00 AM. A light breakfast and coffee will be provided.
- Welcome remarks – Mukesh Khare (IBM)
- Realizing Value out of AI/ML – Steven Huels (Red Hat)
- Infrastructure and solutions considerations in a dynamic AI landscape – Matt Baker (Dell)
- Heterogeneous Integration enabled by Advanced packaging empowering next generation AI architectures – Raja Swaminathan (AMD)
Session speakers:
- Mukesh Khare – Vice President, Hybrid Cloud, IBM Research
- Steven Huels – General Manager, Artificial Intelligence, Red Hat
- Matt Baker – Senior Vice President, Artificial Intelligence Strategy, Dell Technologies
- Raja Swaminathan – Corporate Vice President, Packaging, AMD
A sit-down lunch will be provided for all guests.
Announcements:
- Jeff Burns (IBM)
Plenary Talks:
- Enabling a Flexible AI Infrastructure with PyTorch – Soumith Chintala (Meta) and Priya Nagpurkar (IBM)
- Semiconductor Technology Platforms for AI Scaling – Yasumitsu Orii (Rapidus) and Hemanth Jagannathan (IBM)
- Analog In-Memory Computing with Phase Change and Flash Memories – KC Wang (Macronix) and Vijay Narayanan (IBM)
Session speakers:
- Jeffery Burns – Director, AI Compute, IBM Research
- Soumith Chintala – Vice President, AI Research, Meta
- Priya Nagpurkar – Vice President, Hybrid Cloud Platform and Developer Productivity, IBM Research
- Yasumitsu Orii – Senior Managing Executive Officer, 3D Assembly Division, Rapidus
- Hemanth Jagannathan – Distinguished Engineer, Chiplet and Advanced Packaging Technology & Quantum 300mm Scale-out, IBM Research
- KC Wang – Chief Scientist, Macronix
- Vijay Narayanan – IBM Fellow and Senior Manager, IBM Research
Panel – AI, semiconductors, and the role of government:
- Host:
- Ashish Nadkarni – Group Vice President and General Manager, Infrastructure Systems, Platforms and Technologies and BuyerView Research, IDC
- Panelists:
- Shadi Shahedipour-Sandvik – Senior Vice President for Research, State University of New York (SUNY)
- Kazumi Nishikawa – Principal Director, Commerce and Information Policy Bureau, Japan Ministry of Economy, Trade, and Industry (METI)
- Carlo Reita – Director, Strategic Partnerships and Planning, CEA-Leti (France)
- Capt. Manuel Xavier Lugo, USN – Head of AI Programs, OSD Chief Digital and Artificial Intelligence Office, US Department of Defense (DoD)
- Prof. Albert Heuberger – Executive Director, Fraunhofer Institute for Integrated Circuits IIS (Germany)
Drinks and light bites will be provided.
IBM Live Demos:
- Efficient Inference on the IBM AIU Accelerator
- On-chip Analog In-Memory Compute
Academic Posters:
- Digital Architecture
- Columbia: iMCU: A Digital In-Memory Computing-based Microcontroller Unit for TinyML
- RPI: EVA: Efficient Vector Architecture for Dynamic Structured Sparsity in Transformer Processing
- Algorithm Optimization
- RPI: Coarser-grained Structured Pruning: Pruning of Experts in the Mixture-of-Experts (MoE) based DNN Architectures with Theoretical Performance Guarantee
- MIT: PockEngine: Sparse and Efficient Fine-tuning in a Pocket
- MIT: SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
- MIT: Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models
- MIT: On-Device Training Under 256KB Memory
- MIT: MCUNet: Tiny Deep Learning on IoT Devices
- Packaging
- SUNY Binghamton: Investigation of electromigration (EM) behavior of micro bumps
- SUNY Binghamton: A Microstructural Investigation of Sub-10 um Pitch Copper Contact Structures and Bonded Copper in Hybrid Bonding
- In-Memory Compute
- GA Tech: Tunable Non-volatile Gate-to-Source/Drain Capacitance of FeFET for Capacitive Synapse
- MIT: Protonic non-volatile programmable resistors for analog neural networks
- MIT: Simulation of analog neural networks based on protonic non-volatile programmable resistors using IBM AI HW Kit
- MIT: Thickness limits of electrolyte and channel layers in protonic ECRAMs
- U Albany: Influence of processing conditions on tantalum oxide resistive random access memory (ReRAM) performance
- U Albany: Experimental analysis of multilevel programming of 1T1R ReRAM crossbar array-based in-memory computing using a microcontroller-based hardware platform
- U Albany: Analog NVM Synapse for Hardware-Aware Neural Network Training Optimization on Hybrid 65nm CMOS / TaOx ReRAM Devices
- U Albany: Reduced switching energy in bilayer Sb/AlSb PCM heterostructure cell
- U Albany: Crystallization template for Sb-rich (A7) PCM phase
- RPI: Complex Synapse: Design and Fabrication
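One of the MIT posters above simulates protonic analog devices with the IBM AI HW Kit (the open-source aihwkit library). As a rough, minimal sketch of how that kit is typically used, not of the posters' actual experiments, the snippet below trains a single simulated analog layer; the toy data, layer sizes, and learning rate are placeholder assumptions.

# Minimal aihwkit sketch (pip install aihwkit); toy data and hyperparameters
# are placeholders, not taken from any of the posters above.
from torch import Tensor
from torch.nn.functional import mse_loss

from aihwkit.nn import AnalogLinear   # fully connected layer backed by a simulated analog tile
from aihwkit.optim import AnalogSGD   # SGD variant aware of analog device updates

# Toy inputs and targets.
x = Tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
y = Tensor([[1.0, 0.5], [0.7, 0.3]])

# A single 4-to-2 analog layer; weights live on a simulated resistive crossbar.
model = AnalogLinear(4, 2)

# Analog-aware optimizer, attached to the model's analog tiles.
opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)

for epoch in range(10):
    opt.zero_grad()
    pred = model(x)
    loss = mse_loss(pred, y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss = {loss.item():.6f}")

In the kit, device-level behavior (for example, noise and update characteristics of a particular memory technology) is supplied through an rpu_config argument to the analog layer, which is how technology-specific studies like those above would differ from this generic sketch.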