Publication
OFC 2017
Conference paper
High capacity VCSEL-based links
Abstract
• Commercial cloud servers and High Performance Computing (HPC) need higher-data-rate, short-distance links
  - 100% of the fiber cables in the Blue Gene/Q Sequoia (20 PFlop system) are < 23 m
  - 100% of the fiber cables in the largest Power 775 system are < 28 m
  - Data centers also contain many very short-distance links
• Directly modulated VCSELs and multimode fiber will satisfy server, HPC, and some data-center needs for several generations to come.
• Multimode VCSEL-based technology is now, and will likely remain, the cheapest and lowest-power solution in this space for the foreseeable future.
• High-speed VCSEL technology keeps improving
  - Tremendous progress in single-VCSEL speed in just the past 4 years
    • 50 Gb/s reached in 2012 (NRZ)
    • 100 Gb/s reached in 2015 (DMT, duobinary PAM-4)
    • 150 Gb/s reached in 2016 (poly-binary)
  - Equalization has extended the NRZ data rate to > 71 Gb/s
  - Continued improvements in VCSEL bandwidth and RIN could realize 100 Gb/s NRZ
  - NRZ is preferred; PAM-4 + FEC adds undesired latency (see the sketch after this list)
  - Continued improvements in DACs/ADCs are needed for > 150 Gb/s with higher-order modulation (HOM)
• Optics needs to move to the first-level package to realize its full potential
  - If possible, sweep driver and receiver functions into the ASIC/switch chip
  - SiGe first, followed by CMOS later.
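To make the NRZ-versus-PAM-4 trade-off concrete, here is a minimal arithmetic sketch, not taken from the paper: PAM-4 carries 2 bits per symbol and so halves the required symbol rate relative to NRZ at the same bit rate, but a block FEC cannot start decoding until a full codeword has been received, which adds latency. The 100 Gb/s lane rate and the RS(544,514) code with 10-bit symbols used below are illustrative assumptions, not values stated in the abstract.

```python
# Illustrative sketch (assumed example values, not from the paper):
# compare NRZ vs. PAM-4 symbol rates at a given bit rate, and estimate
# the minimum latency added by accumulating one block-FEC codeword.

def symbol_rate_gbaud(bit_rate_gbps: float, bits_per_symbol: int) -> float:
    """Symbol rate = bit rate / bits per symbol (NRZ = 1, PAM-4 = 2)."""
    return bit_rate_gbps / bits_per_symbol

def fec_block_latency_ns(block_bits: int, bit_rate_gbps: float) -> float:
    """Time to accumulate one FEC codeword before decoding can begin (ns).
    Ignores decoder processing time, so this is a lower bound."""
    return block_bits / bit_rate_gbps  # bits / (Gb/s) gives nanoseconds

if __name__ == "__main__":
    rate = 100.0  # Gb/s per lane (assumed example value)

    print(f"NRZ   @ {rate:.0f} Gb/s -> {symbol_rate_gbaud(rate, 1):.0f} GBaud")
    print(f"PAM-4 @ {rate:.0f} Gb/s -> {symbol_rate_gbaud(rate, 2):.0f} GBaud")

    # Assumed RS(544,514) code over 10-bit symbols: 5440 coded bits per block.
    block_bits = 544 * 10
    print(f"FEC block accumulation alone adds ~"
          f"{fec_block_latency_ns(block_bits, rate):.0f} ns")
```

At these assumed values the codeword-accumulation delay alone is roughly 54 ns per hop, which is the kind of fixed overhead the abstract argues is undesirable for latency-sensitive HPC links, whereas NRZ without mandatory FEC avoids it.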