23 Mar 2021

COVID-19 HPC Consortium one year on: Getting ready for the next crisis

Today, we honor the one-year anniversary of the formation of the COVID-19 High Performance Computing Consortium, a groundbreaking public-private initiative that gives researchers around the globe unprecedented access to the world’s most powerful computing resources.

A year has passed. In March 2020, the world had only begun to grasp the reality of the COVID-19 pandemic. But one thing quickly became clear to industry, academia, and government alike: the virus that causes the disease would be with us for a long time.

And that we had to fight it.

Today, we honor the one-year anniversary of the formation of the COVID-19 High Performance Computing Consortium, a groundbreaking public-private initiative that gives researchers around the globe unprecedented access to the world’s most powerful computing resources. The Consortium brings together 43 organizations from around the world, uniting academia, government, and technology companies — many of them typically rivals — to tackle COVID-19, sharing their knowledge in ways that would not be possible if they were acting alone.

This past year’s results have been impressive, and science isn’t about to stop.

For example, we now understand how the virus interacts with human receptor proteins, how it mutates, and what those mutations mean for the creation of vaccines and therapeutics. We have found new molecules and proteins capable of fighting the coronavirus. We have obtained high-resolution simulations of aerosol transport in indoor environments. And so much more.

A remarkably fast launch

It all started with a phone call. In early March last year, Dario Gil, the Director of IBM Research, received some personal news from his family in Spain — his cousin, a doctor, had tested positive for COVID-19. Back then, cases in Spain were already surging, weeks before the pandemic’s first wave hit America hard.

After an internal discussion, an idea was born — to pool US supercomputing resources so that researchers worldwide could use them to fight the deadly new disease. Together with several IBM Research colleagues, Gil contacted the Department of Energy and the White House Office of Science and Technology Policy, and the idea started to become reality.

The partnership got underway incredibly quickly, spearheaded by the White House, the Department of Energy, the National Science Foundation, and IBM. It took just a few days to bring the main partners on board, complete the paperwork, set up the review and matching committees that handle research proposals, and alert scientists around the globe to this powerful new computing resource.

Shortly after the launch, the first proposals started pouring in. A year later, the Consortium supports 98 active projects on several continents, all using computing resources through the cloud, in three areas: Basic Science, Therapeutic Development, and Patients. Thirty-two of these projects already have clinical or experimental transition plans.

Let’s zoom in on a few.

Thanks to the resources offered by the partnership, a team led by Harel Weinstein at Weill Cornell Medical College used the IBM-developed supercomputer AiMOS at Rensselaer Polytechnic Institute (RPI) to simulate the molecular mechanisms of SARS-CoV-2 interactions with human cell membranes. This work has led to a better understanding of how the virus interacts with the human body.

Meanwhile, scientists at City University of New York, led by Mateusz Marianski, used supercomputers at Lawrence Livermore National Laboratory (LLNL) and the Texas Advanced Computing Center (TACC) to study the role sugars play in preventing the virus from entering our cells. They analyzed the binding between carbohydrates on the surface of the coronavirus and receptors in a human cell — the ‘initial handshake’ between the two species. If successful, the research could help create new broad-spectrum drugs to fight most viruses, from the one that triggers COVID-19 to those that cause flu, Zika, yellow fever, dengue, HIV, hepatitis C, herpes, and others.

Another team, led by Jennifer Diaz at The Mount Sinai Hospital in New York, turned to the IBM-built Summit supercomputer to create drug-pair synergy predictions for COVID-19 protein-protein interaction (PPI) networks. The researchers identified 10 drug pairs predicted to target the COVID-19 PPI network — and in vitro validation is now underway.

It’s not just about basic science, either. A group of researchers led by John Davis at the University of California San Diego used the San Diego Supercomputer Center to develop a multi-county COVID-19 transmission model. The model forecasts the hospital and ICU beds needed during such a pandemic and could be used anywhere in the world in future crises.
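To give a flavor of what such a transmission model does — the UCSD team’s actual model is far more sophisticated, and none of its details appear in this article — here is a minimal, purely illustrative SEIR-style sketch in Python. Every name and parameter value, including the hospitalization and ICU fractions, is an assumption chosen only for demonstration.

```python
# Illustrative only: a toy SEIR compartmental model with a crude
# hospital/ICU load estimate. All rates and fractions below are
# assumed placeholder values, not those of the UCSD model.

def seir_bed_forecast(population=1_000_000, days=180, dt=0.1,
                      beta=0.3,        # assumed transmission rate
                      sigma=1 / 5.2,   # assumed incubation rate (1/days)
                      gamma=1 / 10,    # assumed recovery rate (1/days)
                      hosp_frac=0.05,  # assumed share of infectious needing a bed
                      icu_frac=0.25):  # assumed share of hospitalized needing ICU
    s, e, i, r = population - 1.0, 0.0, 1.0, 0.0
    peak_beds = peak_icu = 0.0
    # Simple forward-Euler integration of the S -> E -> I -> R flows.
    for _ in range(int(days / dt)):
        new_e = beta * s * i / population  # susceptible become exposed
        new_i = sigma * e                  # exposed become infectious
        new_r = gamma * i                  # infectious recover
        s -= new_e * dt
        e += (new_e - new_i) * dt
        i += (new_i - new_r) * dt
        r += new_r * dt
        peak_beds = max(peak_beds, hosp_frac * i)
        peak_icu = max(peak_icu, icu_frac * hosp_frac * i)
    return peak_beds, peak_icu

beds, icu = seir_bed_forecast()
print(f"peak hospital beds ~ {beds:,.0f}, peak ICU beds ~ {icu:,.0f}")
```

A production model would layer county-to-county mixing, age structure, and calibration against observed case data on top of a core like this; the sketch shows only the compartmental skeleton from which bed demand is derived.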

And Som Dutta at Utah State University and his colleagues relied on the TACC supercomputer to create precise simulations of how virus-laden droplets propagate in indoor environments when people simply breathe or speak. The research is now being used to study the droplets’ so-called ‘residence time’ and deposition pattern in classrooms.

We still don’t fully understand this virus, but the progress researchers have made in such a short amount of time is astounding.

Looking into the future

Overall, the Consortium is proof that it’s possible to act fast to address a crisis by uniting public and private organizations — government, academia, and companies. It also shows why it’s important to create a new, broader organization to tackle other global challenges. A few months ago, IBM Research started developing such an initiative, dubbed the National Strategic Computing Reserve. Composed of experts and resource providers from government, academia, non-profits, and industry, the Reserve would provide access to critical computing capabilities and services in times of urgent need. Computing is a core element of so many important capabilities, essential for responding properly to national crises, ensuring public health and safety, and protecting critical resources and infrastructure.

The idea behind the Reserve is to enable the world to mobilize computing assets effectively in an emergency, accelerating the discovery of whatever is needed to combat the crisis. It could and should be used for future threats — from hurricanes, earthquakes, and oil spills to pandemics and wildfires, and even for rapid-turnaround modeling when space missions are in jeopardy.

And we shouldn’t stop there.

The National Strategic Computing Reserve should be the first instantiation of a so-called Science Readiness Reserve, an even wider global organization. This new international body could and should help us develop a strategic, sustainable approach to applying the world’s computing and scientific capabilities to future global challenges and needs. Because the world hasn’t run out of problems to solve.

As we mark the one-year anniversary of the COVID-19 High Performance Computing Consortium, let’s remember that we can all join forces to fight global crises — effectively. We’ve done it with this pandemic. We can and should do it again.
