IBM is using AI and drones to help spot cracks in airport runways

IBM Research is working with the Canton of Zurich, drone company Pixmap, and Dubendorf Air Base to detect defects in runways with AI before major problems occur.

Detecting cracks in civil infrastructure, such as bridges, roads, and airport runways, isn't easy, but it’s crucial to prevent bigger problems and enhance maintenance routines.

To address the issue, our IBM Research team based in Zurich has developed an AI model that uses computer vision to detect tiny cracks in high-resolution images collected by drones. We've teamed up with the Canton of Zurich, the drone operations company Pixmap, and Dubendorf, the military airport on the outskirts of Zurich, to inspect the airport's runway surface. The project will test how several different AI models perform, with results expected in the fall of 2023.

Drones are ready for lift-off

To inspect the runways, a drone equipped with a camera will scan the runway, taking pictures. Our AI model will automatically apply what’s known as instance segmentation – a way to detect individual object instances and find their boundaries – to identify cracks on more than 10,000 images. This directs a civil engineering expert to relevant regions to judge the state of the runway. With the help of GPS and image stitching technology developed by our team, we can then create representations of the runway to help people quickly find and describe the location of the defects in the field. Information on crack lengths and widths is automatically populated and stored so that it can be searched for later.
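As a rough illustration of the instance-segmentation step described above, grouping detected crack pixels into separate instances and measuring each one's extent, here is a minimal pure-Python sketch. The function names, the 4-connected flood fill, and the bounding-box measurement are our own simplifications for illustration, not the production model:

```python
from collections import deque

def label_crack_instances(mask):
    """Group adjacent crack pixels (value 1) in a binary mask into
    separate instances via 4-connected flood fill, mimicking the
    'find individual object instances' step of instance segmentation."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    instances = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and labels[r][c] == 0:
                # Start a new crack instance and flood-fill its pixels.
                instance_id = len(instances) + 1
                pixels = []
                queue = deque([(r, c)])
                labels[r][c] = instance_id
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = instance_id
                            queue.append((ny, nx))
                instances.append(pixels)
    return instances

def crack_extent(pixels):
    """Bounding-box extent (height, width) in pixels for one instance,
    a rough stand-in for the stored length/width measurements."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    return (max(ys) - min(ys) + 1, max(xs) - min(xs) + 1)
```

In a real pipeline the mask would come from the segmentation model's output, and the pixel extents would be converted to physical lengths and widths before being stored for later search.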

This isn’t the first inspection project we’ve worked on. Since 2019, our team has been working with Sund and Baelt (S&B), inspecting Europe’s longest suspension bridge, the Storebælt. It’s the third-longest suspension bridge in the world, linking the eastern and western parts of Denmark. As part of that initiative, our AI has inspected more than 20 of the bridge’s pillars, differentiating between cracks, spalling, algae, and rust with 94% accuracy. One of the big challenges we’ve addressed recently is being able to precisely locate cracks less than a millimeter wide on a structure that’s hundreds of meters long, while also increasing the detection accuracy on the high-resolution images we capture. We’ve also created a tool to organize, process, and visualize the large amounts of data we collect with our imagery.
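Why is finding sub-millimeter cracks from a camera so hard? It comes down to the ground sampling distance (GSD): the physical size one pixel covers, set by the camera's standoff distance, focal length, and sensor. The sketch below uses the standard pinhole-camera GSD formula with hypothetical camera parameters (the numbers are ours, not from the Storebælt project):

```python
def ground_sampling_distance(distance_m, focal_length_mm,
                             sensor_width_mm, image_width_px):
    """Physical width (in mm) covered by one pixel, via the standard
    pinhole-camera GSD formula: distance * sensor_width /
    (focal_length * image_width)."""
    return (distance_m * 1000.0 * sensor_width_mm) / (focal_length_mm * image_width_px)

def min_detectable_crack_mm(gsd_mm, min_pixels=2):
    """A crack narrower than roughly min_pixels * GSD is unlikely to be
    resolved reliably (min_pixels is a hypothetical detector threshold)."""
    return min_pixels * gsd_mm

# Example: full-frame sensor (36 mm wide), 50 mm lens, 8000 px wide
# image, captured 3 m from the surface.
gsd = ground_sampling_distance(3.0, 50.0, 36.0, 8000)  # 0.27 mm/pixel
```

At these assumed settings each pixel covers about a quarter of a millimeter, so a 1 mm crack spans only a few pixels, which is why detection accuracy on high-resolution imagery matters so much.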

Last year, our team also tested the inspection technology at Frankfurt’s Fraport airport. The project’s goal was to inspect runways to detect anomalies and identify foreign object debris — obstacles on the runway, such as cans, bottles, waste, or small pieces of metal. We used our visualization and image stitching capabilities on the data, which also helped us to develop the backend technology we’re using in Dubendorf.
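To give a feel for how GPS can anchor image stitching, the sketch below projects GPS coordinates into local meters with an equirectangular approximation (accurate over runway-scale distances) and converts them to pixel offsets in a mosaic. The projection formula is standard; the tile-placement convention is a hypothetical simplification, not the actual backend:

```python
import math

def gps_to_local_meters(lat, lon, ref_lat, ref_lon):
    """Project GPS coordinates to local (east, north) meters relative to
    a reference point, using an equirectangular approximation that is
    accurate over short, runway-scale distances."""
    earth_radius = 6371000.0  # mean Earth radius in meters
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    north = d_lat * earth_radius
    east = d_lon * earth_radius * math.cos(math.radians(ref_lat))
    return east, north

def tile_pixel_offset(east, north, meters_per_pixel):
    """Pixel offset of an image tile in the stitched mosaic
    (hypothetical convention: x grows east, y grows south)."""
    return round(east / meters_per_pixel), round(-north / meters_per_pixel)
```

In practice, GPS placement like this gives a coarse initial layout, and feature-based image registration then refines the alignment between overlapping photos.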

Turning to AI foundation models

The current project in Switzerland also addresses runway inspection, but we've gone a step further with the tech. We expect the data to look different from everything we captured in the bridge project, given how different a runway and a bridge, and their cracks, can look. To achieve the best possible results, we would normally need annotated, task-specific data to build new models, which can be labor intensive and time consuming.

To address this challenge, we have created a new class of AI model that we're calling Foundation Models for Visual Inspection. These are deep learning models pre-trained on a large set of non-annotated, domain-specific data, which can then be fine-tuned with a much smaller amount of task-specific labeled data. We aim to demonstrate how the model can use more than 100,000 domain-relevant, unannotated images to provide a better inspection result. As with foundation models generally, the pre-training requires no labels at all.
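The two-stage recipe, label-free pre-training on lots of surface imagery followed by fine-tuning a small decision layer on a handful of labels, can be illustrated with a deliberately tiny toy. Here "patches" are just lists of pixel intensities and the "representation" is simple intensity statistics; this is our stand-in for a real foundation model, not IBM's implementation:

```python
from statistics import mean, stdev

class SurfaceInspectionModel:
    """Toy illustration of the foundation-model recipe: label-free
    pre-training on many surface patches, then fine-tuning only a
    small decision layer on a few labeled examples."""

    def pretrain(self, unlabeled_patches):
        # Learn a general representation of intact surfaces: here,
        # simply the typical intensity statistics across unlabeled patches.
        values = [v for patch in unlabeled_patches for v in patch]
        self.mu = mean(values)
        self.sigma = stdev(values)

    def embed(self, patch):
        # "Representation": how anomalous a patch looks relative to the
        # pre-trained statistics (cracks appear as darker pixels).
        return abs(mean(patch) - self.mu) / self.sigma

    def finetune(self, labeled_patches):
        # Fit only a threshold on the frozen representation, using a
        # few labels (1 = crack, 0 = intact surface).
        crack = [self.embed(p) for p, y in labeled_patches if y == 1]
        ok = [self.embed(p) for p, y in labeled_patches if y == 0]
        self.threshold = (min(crack) + max(ok)) / 2.0

    def predict(self, patch):
        return 1 if self.embed(patch) > self.threshold else 0
```

The point of the toy is the division of labor: `pretrain` never sees a label, while `finetune` needs only a couple of labeled examples because the representation already encodes what "normal surface" looks like.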

For our use case, this means the foundation models learn a general representation of concrete surfaces and runways. Once the foundation model has been fine-tuned to the specific setting of detecting cracks on a particular bridge or runway, the AI can then search for those cracks.

This approach works better with less annotated data than standard deep learning approaches that require end-to-end training on client data. The underlying model could potentially be used to inspect defects on any large surface, such as tunnels, street pavements, or dams. Future updates to the technology will focus on improving operation speed when working with poor-quality images, given that you can't always inspect surfaces on a beautifully clear day. We'll also continue to focus on scaling the models to compute on limited resources so we can deliver results faster and more cost-effectively.
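One common way to make high-resolution inspection tractable on limited compute is to split each image into overlapping tiles that fit a detector's fixed input size; overlap keeps cracks that straddle tile borders detectable. The sketch below shows one hypothetical tiling scheme (our illustration, not a description of the deployed system):

```python
def tile_image(width, height, tile, overlap):
    """Return (x, y) top-left corners of overlapping square tiles
    covering a width x height image, so a detector with a fixed
    input size can process the image piece by piece. The final tile
    in each row/column is placed flush with the image edge."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    if xs[-1] != width - tile:
        xs.append(width - tile)
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if ys[-1] != height - tile:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```

Tiles can then be batched through the model, and per-tile detections merged back into image coordinates, trading a little redundant computation in the overlaps for bounded memory use.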