We consider an exciting new computing paradigm called cloud-backed swarm cognition. It refers to an envisioned future in which (mobile) edge devices collaborate to provide intelligent services, with the cloud supplying resilient backup and supervision for those services. A specific example domain is that of smart, connected autonomous vehicles (e.g., cars, buses, trucks, or drones). In this paradigm, the cloud serves as the (relatively) stable repository of reference knowledge, accessed less frequently than in non-swarm computation, while the vehicle 'swarm' serves as a (relatively) dynamic cache of knowledge at the 'edge.' Leveraging the collaborative swarm mode reduces real-time deadline pressures at the individual node level while improving edge resilience through redundancy. Overall, this leads to smarter vehicles through: (a) improved edge inferential accuracy and (b) improved system-level energy efficiency. In this paper, we consider the system architecture represented by the cloud-backed swarm cognition apparatus. We provide a visionary perspective on the fundamental trade-offs that one must model and interpret when tuning the parameters of this new architecture, in a scenario where a swarm of connected cars performs real-time traffic sign recognition.
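The cache hierarchy implied by this architecture (own cache, then swarm peers, then cloud) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all names (`SwarmNode`, `Swarm`, `recognize`, the `signature -> label` keying) are hypothetical, and the "recognition" step is reduced to a lookup so the escalation path is visible.

```python
class SwarmNode:
    """An edge node (e.g., a connected car) holding a local cache of sign labels."""

    def __init__(self, node_id, cache=None):
        self.node_id = node_id
        # Hypothetical keying: a sign "signature" string maps to a label.
        self.cache = dict(cache or {})

    def local_lookup(self, signature):
        return self.cache.get(signature)


class Swarm:
    """A collaborative swarm: peers act as a distributed edge cache,
    with the cloud as the stable fallback repository of reference knowledge."""

    def __init__(self, nodes, cloud_repository):
        self.nodes = nodes
        self.cloud = cloud_repository  # stable, less frequently accessed
        self.cloud_hits = 0            # track how often the edge escalates

    def recognize(self, node, signature):
        # 1. Try the querying node's own cache (fastest path).
        label = node.local_lookup(signature)
        if label is not None:
            return label
        # 2. Ask swarm peers: edge redundancy relaxes per-node deadlines,
        #    since no single node must hold all knowledge locally.
        for peer in self.nodes:
            if peer is node:
                continue
            label = peer.local_lookup(signature)
            if label is not None:
                node.cache[signature] = label  # replicate at the edge
                return label
        # 3. Fall back to the cloud (higher latency, but authoritative).
        self.cloud_hits += 1
        label = self.cloud.get(signature, "unknown")
        node.cache[signature] = label
        return label
```

In this toy form, `cloud_hits` exposes the trade-off the paper is concerned with: the larger and better-shared the swarm cache, the less often the edge pays the latency and energy cost of a cloud round trip.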