We investigate the advantages of using co-packaged optics in next-generation data center and AI supercomputer networks. The increased escape bandwidth offered by co-packaged optics opens multiple paths toward building switches of 50 Tb/s and beyond, expanding opportunities in both the data center and supercomputing domains. Network architects can thus broaden their design space and develop simplified networks with enhanced locality properties. Co-packaging optics at both the switch and the server enables networks with double the capacity while reducing the switch count by 64% compared to state-of-the-art systems. We evaluate these concepts through discrete-event simulations of all-to-all and all-reduce traffic patterns, which emulate the collective communications commonly found in network-bound applications. First, we quantify the all-to-all overhead incurred when the virtual machines (VMs) of an application are distributed across multiple leaf switches, compared with the scenario in which all VMs are placed under a single switch. We then evaluate the performance of an AI supercomputing cluster by simulating both patterns for different message sizes while also varying the number of participating nodes. The results suggest that networks with improved locality properties become increasingly important as the network stack operates at higher speeds: for a stack latency of 1.25 µs, placing an application under multiple switches can result in up to 68% higher completion times than placing it under a single switch. For AI supercomputers, significant improvements are observed in mean server throughput, which exceeds 90% for configurations involving 256 nodes and message sizes of at least 128 KiB.