Spiking Neural Networks (SNNs), which represent information as sequences of spikes, are gaining interest due to the emergence of low-power hardware platforms such as IBM TrueNorth and Intel Loihi, and due to their intrinsic ability to process temporal streams of data (e.g., outputs from event-based cameras). A spike produced by a neuron in an SNN is an event that triggers updates to the membrane potentials of each of the fanout neurons based on the weight associated with the synaptic connection, possibly resulting in further spikes being generated. The time and energy consumption of SNN implementations are dominated by accesses to the synaptic weights in memory and by the communication of spikes through the on-chip network. To improve the energy efficiency of SNNs, we therefore propose Dynamic Spike Bundling (DSB), wherein an event to fanout neurons is not generated for every spike; instead, spikes produced by a neuron that occur close in time are dynamically bundled, with a single event being generated for the entire spike bundle. This reduces memory accesses, as the synaptic weight can be fetched just once and reused across all spikes in the bundle. Communication traffic is also reduced, as fewer messages are exchanged between neurons. To evaluate DSB, we develop B-SNNAP, an event-driven SNN accelerator with hardware support for dynamically bundling spikes with minimal overheads. Across 7 image recognition benchmarks, including the CIFAR100 and ImageNet datasets, DSB achieves a 1.15x-3.8x reduction in energy for <0.1% loss in accuracy, and up to 5.1x savings when <1% accuracy loss is tolerable.
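The bundling idea described above can be sketched in a few lines. This is a minimal illustrative sketch, not the B-SNNAP hardware mechanism: the function names, the explicit bundling `window` parameter, and the dictionary of membrane potentials are all assumptions introduced for clarity.

```python
# Hypothetical sketch of Dynamic Spike Bundling (DSB).
# Assumption: spikes within `window` time units of a bundle's first spike
# are grouped, and one event per bundle is delivered to the fanout neurons.

def bundle_spikes(spike_times, window):
    """Group spike times that fall within `window` of the bundle's first spike."""
    bundles = []
    for t in sorted(spike_times):
        if bundles and t - bundles[-1][0] <= window:
            bundles[-1].append(t)
        else:
            bundles.append([t])
    return bundles

def deliver_bundle(bundle, weight, potentials, fanout):
    """One event per bundle: fetch the synaptic weight once and reuse it
    for every spike in the bundle when updating each fanout neuron."""
    contribution = weight * len(bundle)  # weight fetched once, reused per spike
    for n in fanout:
        potentials[n] += contribution

bundles = bundle_spikes([1.0, 1.2, 1.3, 5.0], window=0.5)
print(bundles)  # [[1.0, 1.2, 1.3], [5.0]]
```

In this toy model, four spikes collapse into two events, so the synaptic weight is fetched twice instead of four times and only two messages cross the interconnect, mirroring the memory-access and traffic savings the abstract claims.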