Lab that Learns
Empowering people with the technology they need to excel at their craft is essential. Capturing and documenting the intricate details of manual and digital procedures is key in many industries, but new ways of generating transcripts of complex processes are needed to unlock the power of AI and learn from all process data. Today, major technology shifts are accelerating the pace toward the industries of the future. Large AI models, called foundation models, are introducing a new paradigm in the way work gets done: workflows consisting of digital and physical activities are captured from multi-modal data streams, including video, audio, text, and raw bytes. IBM Research is bringing this technology to life in the lab to fundamentally rethink how we capture, organize, and learn from all the data produced in complex research and innovation workflows – moving from labs that forget to labs that learn.
By leveraging multi-modal foundation models, researchers can generate end-to-end logs of their work without the overhead of manually entering information into a system of record – paperless, hands-free, and automatic. Hybrid multi-cloud computing is changing the way data, metadata, and AI applications are integrated: data is collected from lab tools on different on-premises networks, then stored and processed in one or more clouds, leveraging AI frameworks in services such as IBM watsonx. In the Lab that Learns, experimental data is organized automatically and can be accessed from anywhere, and any version of an experiment can be reproduced from any point in time. Workflows can be analyzed from beginning to end to optimize execution, and patterns are discovered by continuously learning from all data.
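One way to picture how hands-free logging and point-in-time reproducibility could fit together is an append-only, hash-chained event log: every multi-modal event is recorded with a content hash linking it to the previous one, so any earlier version of the experiment can be replayed. The sketch below is a minimal illustration, not the actual Lab that Learns implementation; the class and method names (`ExperimentLog`, `record`, `replay`) are hypothetical.

```python
import hashlib
import json
import time

class ExperimentLog:
    """Hypothetical sketch of an append-only, multi-modal experiment log."""

    def __init__(self):
        self._events = []

    def record(self, modality, payload):
        """Append one event (e.g. a transcript line or instrument reading)."""
        event = {
            "t": time.time(),
            "modality": modality,  # e.g. "video", "audio", "text"
            "payload": payload,
        }
        # Chain each event to the previous one via a content hash,
        # so every recorded state is a reproducible version.
        prev = self._events[-1]["hash"] if self._events else ""
        raw = prev + json.dumps(event, sort_keys=True)
        event["hash"] = hashlib.sha256(raw.encode()).hexdigest()
        self._events.append(event)
        return event["hash"]

    def replay(self, upto_hash=None):
        """Yield events up to (and including) a given version hash."""
        for event in self._events:
            yield event
            if event["hash"] == upto_hash:
                break

# Usage: record two steps, then reproduce the experiment at version v1.
log = ExperimentLog()
v1 = log.record("text", "mixed reagent A with B")
v2 = log.record("audio", "transcript: temperature raised to 60 C")
steps = list(log.replay(upto_hash=v1))  # only the first recorded step
```

The hash chain is the detail that matters here: because each event commits to everything before it, replaying "up to hash X" deterministically reconstructs the experiment as it stood at that version, without any manual bookkeeping by the researcher.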
This is how we conceived and implemented the Lab that Learns. Together, let’s imagine what this technology can do for the world.