
AI boosts the discovery of metamaterials vital for next-gen gadgets

Example of a metasurface without light direction.

Remember H. G. Wells’ The Invisible Man? The protagonist, a scientist researching optics, managed to make a body’s refractive index the same as that of air, meaning it neither reflected nor absorbed light – and made himself invisible.

That was sci-fi in 1897 — but research into artificial materials with optical properties not found in naturally-occurring materials started around the same time. Dubbed metamaterials, they were finally engineered decades later.

Applications of such optical structures include ultra-compact sensors, as well as imaging and spectroscopy devices used in mobile phone cameras and medical instruments.

To take the design of such critical materials a step further, our team of IBM and MIT researchers at the MIT-IBM Watson AI Lab has developed a so-called deep neural net ‘surrogate.’ A mouthful, perhaps, but it is simply a neural network model that mimics an expensive partial differential equation simulation describing the properties of a material we want to create.

"LEFT: Figure a) Active learning of deep surrogates with uncertainty quantification. RIGHT: Figure b) Data efficiency of active learning."
LEFT: Figure 1a) Active learning of deep surrogates with uncertainty quantification. RIGHT: Figure 1b) Data efficiency of active learning.

In other words, we have used AI to help develop optimal material structures faster than ever before. In a paper recently published in npj Computational Materials,1 we describe how AI helps find the ideal configuration of nanopatterned materials for the best optical design, using active learning of deep surrogates to enable data-efficient training. In two other works, presented at last year’s Knowledge Discovery and Data Mining (KDD) conference and the International Conference on Learning Representations (ICLR), we also detail an approach to designing the best thermoelectric nanoporous structure and the best way to create a two-dimensional semiconductor material.

Bending light the “wrong” way

Metamaterials can have weird optical properties, such as bending light the 'wrong' way because of a negative refractive index. They typically derive these properties from their carefully designed internal microstructure rather than from their chemical composition. In 2006, scientists even managed to create an invisibility cloak of sorts — making an invisible 'hole' in space inside which objects could be “hidden” by bending light around the hole.

To identify the right metamaterial structure with the desired properties, scientists search a vast structural landscape by solving a large-scale partial differential equation (PDE) many times — a computationally intensive task that quickly becomes too much even for a supercomputer to handle.

Another challenge is optimization. It’s crucial to optimize the structure of a metamaterial so that it satisfies specific desired properties. While scientists know what property they would like the material to have, they have no idea what the material’s structure should be to produce that property. The structure-property relationship is complex — materials can have many different structures.

That’s why researchers call such properties “black-box properties.” It’s not possible to create each and every structure and test its properties to identify the right candidate. So, again, scientists rely on running computer simulations with computationally expensive solvers of partial differential equations, whose solutions determine the properties of a given material configuration.

To deal with both challenges, we have developed an AI-accelerated nanomaterial structure discovery process that uses the so-called deep neural net “surrogate” models that mimic an expensive PDE. These surrogates make the process much more efficient — and cheaper.

To explain what a surrogate model is, consider an example.

Say we want to design a material that does not dissipate heat. For that, we first need to analyze how heat would propagate through such a material. The traditional approach would be to test different materials in the lab in the hope of eventually finding one with the necessary heat profile – an expensive and time-consuming process.

Another approach, though, is to look at the heat equation that models the propagation of heat through a material. Such an equation — a PDE — is fairly simple. But for other properties, PDEs can be extremely complicated and expensive to solve.
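
Written out, the simplest form of that heat equation is

$$\frac{\partial u(x,t)}{\partial t} = \alpha \, \nabla^{2} u(x,t),$$

where $u(x,t)$ is the temperature field and $\alpha$ is the material’s thermal diffusivity.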

Tackling Maxwell’s equations with a surrogate

That’s where AI can help — by mimicking the PDE solver.

We’ve developed an AI method that works just as well as the existing expensive PDE solvers, but far more cheaply, enabling computationally efficient optimization. To show that it works, we had our AI learn a surrogate of Maxwell’s equations that predicts the desired physical properties of an optical metasurface.

When such metasurfaces are large, these equations are very expensive to solve. But by using a neural network that mimics the PDE in predicting the physical property, we were able to obtain an efficient surrogate that is fast to compute. And since a neural network is differentiable, we could also easily obtain its gradient, making it simple to optimize for a desired property.
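
To make this concrete, here is a minimal sketch of the workflow (not the model or solver from the paper): `expensive_pde_solver` is a hypothetical placeholder for a Maxwell solver, and the network size and training budget are purely illustrative.

```python
import torch
import torch.nn as nn

def expensive_pde_solver(designs: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for a full Maxwell solver: maps nanostructure
    design parameters to a scalar optical property."""
    return torch.sin(3.0 * designs).mean(dim=-1, keepdim=True)  # placeholder physics

n_params = 16

# 1) Train the surrogate on a modest number of expensive solver runs.
surrogate = nn.Sequential(
    nn.Linear(n_params, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
train_x = torch.rand(512, n_params)          # candidate designs
train_y = expensive_pde_solver(train_x)      # expensive labels, computed once
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    nn.functional.mse_loss(surrogate(train_x), train_y).backward()
    opt.step()

# 2) Optimize a design through the differentiable surrogate.
design = torch.rand(1, n_params, requires_grad=True)
design_opt = torch.optim.Adam([design], lr=1e-2)
for _ in range(500):
    design_opt.zero_grad()
    (-surrogate(design)).mean().backward()   # maximize the predicted property
    design_opt.step()
    with torch.no_grad():
        design.clamp_(0.0, 1.0)              # keep parameters in a valid range
```

The key point is the second loop: because the surrogate is an ordinary neural network, automatic differentiation provides the gradient of the predicted property with respect to the design parameters essentially for free.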

To fit a surrogate, one typically has to run simulations on an expensive PDE solver in order to construct a training set. A deep neural network learning to mimic the PDE solver needs a large amount of training data, meaning a prohibitive computational cost for running the simulations.

To deal with this issue, we decided to model the uncertainty of the surrogates. A surrogate model is never exact, and by getting it to quantify its own uncertainty about a prediction, we can tell which regions of the design space the model needs to focus on to predict better. This technique yields an accurate surrogate that is also data-efficient, reducing the sample complexity of training by 12-fold (Figure 1b).
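
Here is a minimal sketch of that active-learning loop, assuming an ensemble of small networks whose disagreement serves as the uncertainty estimate (the paper’s uncertainty quantification may differ in its details), with the same hypothetical `expensive_pde_solver` placeholder as above.

```python
import torch
import torch.nn as nn

n_params = 16

def expensive_pde_solver(x):                 # hypothetical placeholder solver
    return torch.sin(3.0 * x).mean(dim=-1, keepdim=True)

def make_net():
    return nn.Sequential(nn.Linear(n_params, 64), nn.ReLU(), nn.Linear(64, 1))

def fit(net, x, y, steps=500):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()

ensemble = [make_net() for _ in range(5)]
train_x = torch.rand(64, n_params)           # small initial training set
train_y = expensive_pde_solver(train_x)

for _ in range(10):                          # active-learning rounds
    for net in ensemble:
        fit(net, train_x, train_y)
    # Score a large, cheap pool of candidates by ensemble disagreement.
    pool = torch.rand(2048, n_params)
    with torch.no_grad():
        preds = torch.stack([net(pool) for net in ensemble])  # (5, 2048, 1)
        uncertainty = preds.std(dim=0).squeeze(-1)
    # Only the most uncertain candidates are sent to the expensive solver.
    new_x = pool[uncertainty.topk(32).indices]
    train_x = torch.cat([train_x, new_x])
    train_y = torch.cat([train_y, expensive_pde_solver(new_x)])
```

Each round spends the expensive solver budget only on the candidates the surrogate is least sure about, which is what drives the reduction in sample complexity.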

We have recently shown in an ICLR 2021 Deep Learning for Simulation (SimDL) workshop paper2 that the same methodology extends to solid mechanics.

But that’s not all.

In a different collaboration between IBM and MIT, the team has developed another method for estimating a surrogate deep neural network that mimics a physical simulation given by a black-box function. We describe these findings in a paper presented at the ICLR 2020 DeepDiffEq Workshop.3

In this work, instead of learning a surrogate that fits only the property-prediction task, the focus was on obtaining a good estimate of the gradient of the black-box function. Having this surrogate gradient then enables us to optimize the desired property using what’s known as Langevin dynamics – a gradient-guided sampling technique.
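
As a rough sketch of that idea (with `surrogate` standing in for any differentiable model of the property, not the network from the paper), a Langevin update nudges a candidate configuration along the surrogate gradient while injecting noise so the search can escape poor local optima.

```python
import torch

def langevin_sample(surrogate, x_init, steps=1000, step_size=1e-2):
    """Sample low-'energy' designs using the surrogate's gradient.
    `surrogate` maps a batch of designs to scalar scores (lower = better)."""
    x = x_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        energy = surrogate(x).sum()
        grad, = torch.autograd.grad(energy, x)
        with torch.no_grad():
            x -= step_size * grad                              # follow the surrogate gradient
            x += (2 * step_size) ** 0.5 * torch.randn_like(x)  # Langevin noise term
    return x.detach()

# Usage (hypothetical): start from random designs, then verify the best
# candidates with the full, expensive simulation.
# candidates = langevin_sample(surrogate, torch.rand(32, n_params))
```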

We applied this method to successfully produce new nano-configuration designs that satisfy the thermal properties governed by the Boltzmann transport equation as well as mechanical constraints describing the stability of the nanomaterial. The Boltzmann transport equation is used to model transport in energy-harvesting technologies, such as thermoelectrics and photovoltaics.

Black-box optimization with ‘expert advice’

Then there are two-dimensional materials, which have also recently been in the spotlight, such as transition metal dichalcogenides (TMDs). These 2D materials have attractive optical, electronic and catalytic properties, and are great candidates for tiny, nanoscale electronic devices. One prominent member of the TMD family is molybdenum disulfide, or MoS2, which has a direct band gap of ∼1.8 eV and high mechanical flexibility, making it an excellent candidate for developing metamaterials with enhanced properties.

But to properly design such materials, it’s crucial to know the structural point defects that affect the electronic and magnetic properties of single-layer MoS2 structures. Typically, researchers obtain the distribution of these point defects and the corresponding energies by running expensive physical simulations.

AI can help us bypass these simulations – using an efficient surrogate machine learning model that predicts these properties.

In a recent KDD paper4, a team of IBM and Argonne National Laboratory scientists describes a method called COMEX – Combinatorial Optimization with Monomial Experts. The method optimizes an unknown black-box function by searching the space of defect configurations while simultaneously learning a surrogate model of that function.

The surrogate model is based on expressing the function as a multilinear polynomial – its natural Fourier expansion. Given the surrogate model, the authors apply simulated annealing, a widely used stochastic sampling method, to propose new defect configurations. In the paper’s experiments, COMEX outperforms competitive methods (random search and simulated annealing) in identifying the optimal defect structure in a sample grid. The method is also computationally very efficient compared to existing methods, thanks to the compact polynomial representation and the learning method used.
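
To make the recipe concrete, here is a heavily simplified sketch in the same spirit rather than the authors’ COMEX algorithm: a degree-2 multilinear surrogate is fit by ordinary least squares (COMEX instead updates its monomial coefficients with an exponential-weights ‘experts’ rule), and simulated annealing runs on the cheap surrogate to propose the next defect configuration; `black_box` is a hypothetical placeholder objective.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 12                                              # number of binary defect sites

def black_box(x):
    """Hypothetical expensive objective over a binary defect configuration."""
    return -float(np.sum((x - (np.arange(n) % 2)) ** 2))

# Monomials up to degree 2: features of the multilinear (Fourier) surrogate.
monomials = [()] + [(i,) for i in range(n)] + list(itertools.combinations(range(n), 2))

def features(x):
    return np.array([np.prod(x[list(m)]) if m else 1.0 for m in monomials])

X = rng.integers(0, 2, size=(40, n))                # initial random evaluations
y = np.array([black_box(x) for x in X])

for _ in range(30):
    # Fit the surrogate coefficients (least-squares stand-in for monomial experts).
    Phi = np.stack([features(x) for x in X])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    def surrogate(x):
        return features(x) @ coef

    # Simulated annealing on the cheap surrogate picks the next candidate.
    x = X[np.argmax(y)].copy()
    temp = 1.0
    for _ in range(500):
        x_new = x.copy()
        x_new[rng.integers(n)] ^= 1                 # flip one defect site
        delta = surrogate(x_new) - surrogate(x)
        if delta > 0 or rng.random() < np.exp(delta / temp):
            x = x_new
        temp *= 0.99
    # Evaluate the proposal with the true black box and grow the dataset.
    X = np.vstack([X, x])
    y = np.append(y, black_box(x))

print("best configuration:", X[np.argmax(y)], "value:", y.max())
```

Each outer round costs only a single expensive evaluation, because the inner simulated-annealing search runs entirely on the polynomial surrogate.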

We are not done yet.

We are now developing optimization algorithms that go beyond the Boolean representation, and we are working on learning an approximation of the precise input geometry for faster estimation of the optimal structure. Hopefully, in the future we’ll be able to engineer a material with any desired property, be it optical, catalytic, electronic, biological or a combination of those – even a real Harry Potter-like invisibility cloak.

References

  1. Pestourie, R., Mroueh, Y., Nguyen, T.V. et al. Active learning of deep surrogates for PDEs: application to metasurface design. npj Comput Mater 6, 164 (2020).

  2. Pestourie, R., Chomette, G., Mroueh, Y., et al. Active Learning of Deep Surrogates for PDEs. ICLR 2021 SimDL Workshop (2021).

  3. Nguyen, T.V., Mroueh, Y., Hoffman, S., et al. Nano-material Configuration Design with Deep Surrogate Langevin Dynamics. ICLR 2020 DeepDiffEq Workshop (2020).

  4. Dadkhahi, H., Shanmugam, K., Rios, J., et al. Combinatorial Black-Box Optimization with Expert Advice. KDD 2020 (2020).