Publication
OFC 2017
Conference paper

High capacity VCSEL-based links

Abstract

• Commercial cloud servers and High Performance Computing (HPC) systems need higher-data-rate, short-distance links
  - 100% of the fiber cables in the Blue Gene/Q Sequoia (20 PFlop/s) system are < 23 m
  - 100% of the fiber cables in the largest Power 775 system are < 28 m
  - Data centers also contain many very short distance links
• Directly modulated VCSELs and multimode fiber will satisfy server, HPC, and some data center needs for several generations to come.
• Multimode VCSEL-based technology is now, and will likely remain, the cheapest and lowest-power solution in this space for the foreseeable future.
• High-speed VCSEL technology keeps improving
  - Tremendous progress in single-VCSEL speed in just the past 4 years:
    • 50 Gb/s reached in 2012 (NRZ)
    • 100 Gb/s reached in 2015 (DMT, Duobinary-PAM-4)
    • 150 Gb/s reached in 2016 (Poly-binary)
  - Equalization has extended the NRZ data rate beyond 71 Gb/s
  - Continued improvements in VCSEL bandwidth and RIN could realize 100 Gb/s NRZ
  - NRZ is preferred; PAM4 + FEC adds undesired latency (see the sketch following the abstract)
  - Continued improvements in DACs/ADCs are needed for > 150 Gb/s with higher-order modulation (HOM)
• Optics needs to move to the first-level package to realize its full potential
  - If possible, sweep the driver and receiver functions into the ASIC/switch chip
  - SiGe first, followed by CMOS later
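
To make the modulation-format comparison in the abstract concrete, the short Python sketch below relates line rate, symbol rate, and bits per symbol for NRZ and PAM4. It is an illustrative calculation only, not taken from the paper; the format list and the 100 Gb/s target are assumptions drawn from the bullet points above.

# Illustrative sketch (not from the paper): how bits per symbol set the
# symbol (baud) rate a link must support for a given line rate. PAM4 halves
# the required symbol rate relative to NRZ, but in practice needs FEC, whose
# encoding/decoding adds the latency the abstract cites as undesirable.

BITS_PER_SYMBOL = {
    "NRZ": 1,   # 2 levels -> 1 bit per symbol
    "PAM4": 2,  # 4 levels -> 2 bits per symbol
}

def required_baud_gbd(line_rate_gbps: float, bits_per_symbol: int) -> float:
    """Symbol rate in GBd needed to carry the given line rate (FEC overhead ignored)."""
    return line_rate_gbps / bits_per_symbol

if __name__ == "__main__":
    target_gbps = 100.0  # the 100 Gb/s NRZ goal mentioned in the abstract
    for fmt, bps in BITS_PER_SYMBOL.items():
        baud = required_baud_gbd(target_gbps, bps)
        print(f"{target_gbps:.0f} Gb/s with {fmt}: {baud:.0f} GBd")

Running the sketch prints 100 GBd for NRZ and 50 GBd for PAM4: PAM4 relaxes the bandwidth requirement on the VCSEL and electronics, but at the cost of SNR and, once FEC is added, latency.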

Date

31 May 2017
