One example of OOD in use could be a supply chain interruption that forces a manufacturer to switch to a new supplier of a similar product. The new parts might be subtly different, in something as minor as a slightly different color. If a component had to be replaced in a substation, it might look a little different from the original. An AI model without OOD would flag the new part as defective, causing a costly error. OOD reduces the likelihood that these subtle differences are reported as false positives.
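As a rough illustration, out-of-distribution detection can be as simple as comparing a new image's feature embedding against statistics of the training data and routing unfamiliar inputs to human review instead of labeling them defective. The sketch below is a minimal example under that assumption; the feature extractor, threshold, and routing messages are hypothetical placeholders, not how any particular product implements OOD.

```python
import numpy as np

# Minimal sketch of distance-based OOD detection (hypothetical names).
# Assumes some pretrained backbone produces a fixed-length embedding per
# image; the training set defines the "in-distribution" region.

def fit_reference(train_embeddings: np.ndarray):
    """Summarize the training distribution with a mean and (pseudo-)inverse covariance."""
    mean = train_embeddings.mean(axis=0)
    cov = np.cov(train_embeddings, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical safety
    return mean, cov_inv

def mahalanobis(x: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Distance of one embedding from the training distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def route(embedding: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray,
          threshold: float = 3.0) -> str:
    """Send familiar parts to the defect classifier, unfamiliar ones to review."""
    if mahalanobis(embedding, mean, cov_inv) > threshold:
        return "out-of-distribution: flag for human review, not as defective"
    return "in-distribution: run the normal defect classifier"
```

In practice the threshold would be calibrated on held-out data so that a valid part with a slightly different color falls below it, while genuinely novel inputs are flagged for review rather than misclassified.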
“With Spot and with AI capabilities, it allows National Grid to inspect critical electrical and gas facilities quickly and thoroughly, while allowing staff to perform other critical duties. We’re able to be more efficient with our time,” said Dean Berlin, Lead Robotics Engineer at National Grid. “But more importantly, we’re able to conduct the work safely. The robot can enter dangerous, highly electrified areas where humans cannot go, unless we shut down a station. This lets us monitor areas routinely without costly shutdowns.”
As AI scales into new industries, ensuring the right model is used for the right problem at the right time is critical for clients aiming to automate their operations. Conditions at a single assembly line can vary drastically, from new defects to changing environments and lighting. AI models need to be dynamic, constantly retraining on new data. When a model is updated, organizations need to make sure the update reaches every edge device in their system; otherwise, defects will be missed and false positives will occur. Managing a system like this requires a hub-and-spoke model.
The hub, located either on premises or in the cloud, is where models are aggregated, retrained, and redeployed to the right edge devices, each of which is an individual spoke. The spokes can send information back to the hub as well. One spoke, such as a Boston Dynamics Spot, could detect a new anomaly, capture data, and send it to the hub, where the data can be labeled by an expert and used to retrain the model. The new model is then deployed to all identical devices in the ecosystem.
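A minimal sketch of that hub-and-spoke loop is below. The class names, in-memory registries, and the labeling and retraining stubs are all hypothetical stand-ins for whatever model registry, labeling tool, and training pipeline an organization actually uses.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the hub-and-spoke flow described above:
# a spoke reports an anomaly, the hub collects labeled data, "retrains,"
# and pushes the new model version to every registered device.

@dataclass
class Hub:
    model_version: int = 1
    labeled_data: list = field(default_factory=list)
    spokes: list = field(default_factory=list)

    def register(self, spoke):
        self.spokes.append(spoke)

    def receive_anomaly(self, sample):
        label = expert_label(sample)          # placeholder for human labeling
        self.labeled_data.append((sample, label))

    def retrain_and_deploy(self):
        self.model_version += 1               # stand-in for a real training run
        for spoke in self.spokes:
            spoke.update_model(self.model_version)

@dataclass
class Spoke:
    name: str
    hub: Hub
    model_version: int = 0

    def update_model(self, version: int):
        self.model_version = version

    def inspect(self, image):
        if looks_unfamiliar(image):           # e.g., the OOD check sketched earlier
            self.hub.receive_anomaly(image)   # capture data and send it back

def expert_label(sample):
    return "new anomaly"                      # placeholder labeling step

def looks_unfamiliar(image) -> bool:
    return True                               # placeholder anomaly trigger
```

The key design point is that only the hub retrains; every identical spoke simply receives the new version, which keeps the fleet consistent.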
These edge devices don’t all have the same compute power or memory capacity. We’re also developing techniques to automatically send a model to a device such as a Spot when it reaches a certain location, and then remove the model from the machine once inference is done, to save space.
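A rough sketch of that fetch-use-remove pattern is below, with entirely hypothetical function names standing in for a real deployment service and the robot's location trigger.

```python
import os

# Hypothetical sketch: fetch a model only when the device reaches the site
# that needs it, run inference, then delete the file to free limited storage.

MODEL_DIR = "/tmp/edge_models"   # assumed local cache on the device

def fetch_model(site_id: str) -> str:
    """Download the model assigned to this site from the hub (stubbed here)."""
    os.makedirs(MODEL_DIR, exist_ok=True)
    path = os.path.join(MODEL_DIR, f"{site_id}.onnx")
    with open(path, "wb") as f:    # placeholder for a real download
        f.write(b"")
    return path

def run_inspection(model_path: str, images) -> list:
    """Placeholder inference loop; a real device would load the model here."""
    return [f"inspected with {os.path.basename(model_path)}" for _ in images]

def inspect_site(site_id: str, images):
    model_path = fetch_model(site_id)          # pull the model on arrival
    try:
        return run_inspection(model_path, images)
    finally:
        os.remove(model_path)                  # free space once inference is done
```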
Across all of our research, we need to ensure that data can be trusted, is relevant to the problem at hand, keeps the model performant, and stays secure. Research is focusing on innovations to ensure data can be trusted. We’re also working on ways to monitor unusual behavior on an edge device, such as sending or receiving data it shouldn’t.
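One simple way to think about that kind of monitoring is an allow-list check on where a device sends data. The sketch below uses hypothetical endpoint names and is only meant to illustrate the idea, not any specific monitoring system.

```python
# Hypothetical sketch: flag an edge device that contacts endpoints outside
# its approved list, one simple form of behavior monitoring.

ALLOWED_ENDPOINTS = {"hub.example.internal", "models.example.internal"}  # assumed

def audit_connections(observed_destinations: list[str]) -> list[str]:
    """Return any destinations the device should not be contacting."""
    return [dest for dest in observed_destinations if dest not in ALLOWED_ENDPOINTS]

suspicious = audit_connections([
    "hub.example.internal",
    "unknown-host.example.com",   # would be flagged for investigation
])
if suspicious:
    print(f"Unexpected destinations: {suspicious}")
```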
Research’s advancements in AI and edge computing are opening new possibilities for solving clients’ problems cost-effectively, with low-touch ease of use and a high impact on productivity and safety.