It’s a sentiment that was echoed in the keynote of Yann LeCun, Meta’s chief AI scientist, who spoke about where he sees the field of AI heading. Much of the noise these days is around the leaps and bounds generative models have made in the last few years, but they still lack the ability to plan, reason, and form a hierarchical model for carrying out tasks: a single human can learn to drive in a few months, yet after several decades of effort, we still haven’t been able to build reliable autonomous vehicles. To LeCun, the future of AI will require joint-embedding architectures, rather than the transformer architectures that dominate LLMs today. This is where he is focusing his research, which, he said, could be as foundational as the internet.
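For readers unfamiliar with the idea, a joint-embedding architecture learns by predicting representations rather than raw text or pixels. The snippet below is a minimal, purely illustrative sketch of that objective; the encoder, predictor, noisy "views," and layer sizes are all assumptions for illustration, not LeCun’s actual models.

```python
# Illustrative joint-embedding predictive objective (toy sketch, not a real JEPA model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an input vector to an embedding; stands in for a real vision/audio encoder."""
    def __init__(self, dim_in=128, dim_emb=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_emb))

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
predictor = nn.Linear(64, 64)           # predicts the embedding of one view from the other
target_encoder = Encoder()              # in practice often a frozen or slowly updated copy
target_encoder.load_state_dict(encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

x = torch.randn(32, 128)                # a batch of inputs
view_a = x + 0.1 * torch.randn_like(x)  # two "views": here just noisy copies, for illustration
view_b = x + 0.1 * torch.randn_like(x)

opt.zero_grad()
pred = predictor(encoder(view_a))       # predict in embedding space...
with torch.no_grad():
    target = target_encoder(view_b)     # ...rather than reconstructing raw inputs
loss = F.mse_loss(pred, target)         # the loss compares embeddings, not pixels or tokens
loss.backward()
opt.step()
```

The key design choice the sketch tries to convey is that the model never reconstructs the input itself; it only has to get the abstract representation right, which is what distinguishes this family of architectures from generative ones.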
But when models like these do get built, it’s absolutely necessary that they be open source. These systems, LeCun said, need to be diverse, understanding all the world’s languages, cultures, and value systems to truly work for everyone. You’re not going to get that from just a single company based in the US, he argued. That was part of the impetus for Meta and IBM coming together with many other organizations to create the AI Alliance, he said. Launched in 2023, the group comprises companies, research facilities, and institutions that advocate for safe, responsible AI — rooted in open innovation.
This was something several of the speakers on the panel covering the path to quantum-centric supercomputing also brought up. It’s rare that breakthroughs happen in a vacuum. In the case of quantum computing, IBM has a roadmap for the next decade of hardware and software advances, with the aim of building systems that can solve problems that are intractable on classical computers. But without research partners, there’s no way to externally validate the work. And without a robust developer community, we won’t have applications that will improve how we live and work. This is why IBM developed Qiskit, an open-source software development kit for writing quantum programs.
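To give a sense of what working with that kit looks like, here is a minimal Qiskit example that builds a two-qubit entangled circuit and simulates its ideal output. The exact primitives for running circuits on real hardware vary by Qiskit version, so this sketch sticks to circuit construction and statevector simulation.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Build a two-qubit Bell-state circuit: Hadamard on qubit 0, then a CNOT.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

print(qc.draw())  # ASCII diagram of the circuit

# Simulate the ideal (noise-free) final state and print outcome probabilities.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # {'00': 0.5, '11': 0.5}
```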
It’s also why Q-CTRL, whose founder Michael Biercuk was onstage at the event, is aiming to build the quantum equivalent of a parallel-computing platform. Just as in classical computing, building a robust and thriving ecosystem requires expertise across the hardware, the infrastructure, the software, and the domains you’re looking to solve problems for.
Throughout the history of computing, we have been using computers to process increasingly complex and diverse types of information. Everything is made up of signals that we can deconstruct, digitally abstract, analyze, and use, given enough time and compute power. IBM Research Director Darío Gil discussed in his closing remarks how we have moved through the eras of computing, from the PC, to the internet, and now AI, and said that by the 2030s we will likely have processors that can house 1 trillion transistors. And the sorts of signals we could process with that power were on display at the event.
AI is an obvious choice, given how much processing power we know today’s models require to run at scale. One panel on how AI can help scientists showed that there are myriad types of data in the world of biology alone that are entirely unlike the text LLMs are trained on, but equally ripe to explore. And what we can do with a combination of signal processing, AI, and neuroscience could help us unlock how the brain functions.
A discussion between neurobiologist Rafa Yuste and neurosurgeon Eddie Chang highlighted how much progress has been made in areas like understanding how the brain processes speech. The two showed how, with electrodes implanted in a patient’s brain and a recurrent neural network-based AI model, they have created digital avatars for patients with locked-in syndrome that enabled them to communicate with others at around 80 words per minute, a massive increase over systems they had designed just a few years earlier, which could handle only a few words per minute.
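To make the idea of a recurrent neural network-based decoder concrete, here is a heavily simplified sketch of the kind of model involved: a recurrent network that maps a time series of electrode features to a sequence of text tokens. The feature dimensions, layer sizes, and vocabulary here are invented for illustration and bear no relation to the actual clinical system.

```python
# Toy sketch of an RNN that maps neural-signal features to text tokens (illustrative only).
import torch
import torch.nn as nn

class SpeechDecoder(nn.Module):
    def __init__(self, n_electrode_features=256, hidden=128, vocab_size=500):
        super().__init__()
        # The GRU consumes a time series of electrode features: (batch, time, features).
        self.rnn = nn.GRU(n_electrode_features, hidden, batch_first=True)
        # A linear head emits a score over a small word/phoneme vocabulary at each timestep.
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, signals):
        out, _ = self.rnn(signals)
        return self.head(out)  # (batch, time, vocab) logits

decoder = SpeechDecoder()
fake_recording = torch.randn(1, 100, 256)   # 100 timesteps of simulated electrode features
logits = decoder(fake_recording)
predicted_tokens = logits.argmax(dim=-1)    # greedy decode: one token id per timestep
print(predicted_tokens.shape)               # torch.Size([1, 100])
```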
Another panel, exploring the future of gene-editing techniques like CRISPR, focused on similar data-processing needs that are now enabling treatments, built on gene mapping, that weren’t even possible a generation ago. A group of scientists, including Fyodor Urnov and Kiran Musunuru, who spoke at the event, is looking to build a platform where patients with ailments ranging from sickle-cell anemia to heart disease could have their genomes mapped and the offending genes edited within a few days via a CRISPR-powered transfusion. The first human genome was mapped in 2003, an effort that took 13 years and billions of dollars. Now anyone can get their genome sequenced in days for about $400. With the coming computing power, Urnov said he believes that in about a decade’s time, “genetic vaccines” will be as easy to make as a custom pizza at the pizza parlor. “We’re building a platform for this: It’s a bit like making pizza — you just change the toppings.”
Another theme across the day was the future of humanity itself, from how future quantum computers and AI systems will help uncover new materials, drugs, and ways of working, all the way to how we’ll think and feel in the future. To some, there are fears that superintelligent AI systems will lead to the downfall of humanity. LeCun argued that some AI systems today have already surpassed human capability in certain domains, and yet we still seem to be doing fine. He also argued that human intelligence is “highly multidimensional” and unlikely to be completely surpassed by any one system.
On the headier end of the spectrum, discussions also dove into technologies that could fundamentally alter the fate of humanity. With advances in neurotechnology, it may soon be possible to stimulate the brains of people suffering from chronic depression and switch the condition off, almost like flipping a switch. And CRISPR holds the potential to practically eradicate heart disease, the world’s biggest killer, by switching off genes that drive the overproduction of cholesterol. Patients with diseases linked to unique gene mutations could be cured in months, and other hereditary conditions could be edited out of people decades before they become afflictions, in a process that would be about as complicated for the patient as getting a blood transfusion.
None of these breakthroughs will be possible without continued investment in research, and without more events like this one bringing minds from across the world together. It will take “nation-scale investments that will require massive collaboration,” according to Gil, who reminded the audience that in AI, open innovation will outperform what can be done with proprietary models. It’s an exciting, if massive, prospect for the future of humanity. One that will require a lot of math, and even more collaboration.