AI and Cognitive Computing

Explore the future of AI with IBM Research

Watch introduction

Explore projects

AI research @ IBM

Humans are on the cusp of augmenting their lives in extraordinary ways with AI. At IBM Research Labs around the globe, we envision and develop next-generation systems that work side by side with humans, accelerating our ability to create, learn, make decisions and think. We also architect the future of Watson, which has evolved from an IBM Research project into the world’s first and most advanced AI platform. Whether exploring new technical capabilities, collaborating on ethical practices or applying Watson technology to cancer research, financial decision-making, oil exploration or educational toys, IBM Research is shaping the future of AI.

Distributed deep learning software achieves record performance

New IBM Research distributed deep learning software achieves record performance on large neural networks and large data sets

Deep learning is a widely used AI method that helps computers understand and extract meaning from images, sounds and other data types using neural networks, a brain-inspired approach to computing. It holds promise to fuel breakthroughs in everything from consumer mobile app experiences to medical imaging diagnostics. Progress in accuracy, and in deploying deep learning at scale, has been limited by technical challenges that stretch training times to days or weeks. IBM Research AI experts have created distributed deep learning software that achieves record image recognition accuracy while training large neural networks across up to 250 GPUs, the specialized processors built to handle large amounts of data. Developers and data scientists can now preview this technical milestone in version 4 of the PowerAI enterprise deep learning software.
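The core idea behind this kind of software is synchronous data-parallel training: each GPU processes a different slice of the data, and gradients are averaged across all workers after every step so every model replica stays identical. The sketch below illustrates that general pattern using PyTorch's DistributedDataParallel; it is a minimal, generic example of the technique, not IBM's PowerAI DDL API.

```python
# Minimal sketch of synchronous data-parallel training with PyTorch DDP.
# Generic illustration of the gradient-averaging pattern distributed deep
# learning libraries build on; NOT IBM's PowerAI DDL interface.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(model, dataset, epochs=1):
    # One process per GPU; rank and world size come from the launcher (e.g. torchrun).
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    device = torch.device(f"cuda:{rank % torch.cuda.device_count()}")

    model = model.to(device)
    ddp_model = DDP(model, device_ids=[device.index])   # averages gradients across ranks
    sampler = torch.utils.data.distributed.DistributedSampler(dataset)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, sampler=sampler)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(epochs):
        sampler.set_epoch(epoch)                 # reshuffle consistently across ranks
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(ddp_model(images), labels)
            loss.backward()                      # gradient all-reduce happens here
            optimizer.step()

    dist.destroy_process_group()
```

Launched with one process per GPU, each rank trains on its own shard of the data while the all-reduce during the backward pass keeps the model replicas synchronized.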

Read blog

Read research paper


IBM TrueNorth Neurosynaptic System

IBM researchers, led by Dharmendra S. Modha, R&D Magazine’s 2016 Scientist of the Year, developed a chip with the potential to integrate brain-like capability into mobile devices. This revolutionary design is the culmination of more than a decade of IBM research. It can be applied in many fields, including public safety, vision assistance for the blind, home health monitoring and transportation.

Learn more

Read blog


First movie trailer created by AI

IBM Research Takes Watson to Hollywood

Sixty-five percent of movie-goers watch trailers on YouTube to help them pick a movie. 20th Century Fox turned to IBM Research to help make a new trailer for its film "Morgan." Using experimental Watson APIs, the team analyzed the visuals, audio and composition of 100 horror movie trailers to select the best moments for the first AI-generated movie trailer.

Learn more

Learn more about computer vision @ IBM Research


Brain-inspired AI supercomputing system

U.S. Air Force Research Lab Taps IBM to Build Brain-Inspired AI Supercomputing System

IBM Research and the U.S. Air Force Research Laboratory (AFRL) announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume about as much power as a dim light bulb: a mere 10 watts.
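Those headline figures follow directly from IBM's published per-chip TrueNorth specifications of roughly one million neurons and 256 million synapses per chip; the short sketch below simply works through that arithmetic.

```python
# Back-of-the-envelope check of the 64-chip system figures, assuming the
# published per-chip TrueNorth specs (~1 million neurons, ~256 million synapses).
CHIPS = 64
NEURONS_PER_CHIP = 1_000_000
SYNAPSES_PER_CHIP = 256_000_000

total_neurons = CHIPS * NEURONS_PER_CHIP        # 64,000,000      -> "64 million neurons"
total_synapses = CHIPS * SYNAPSES_PER_CHIP      # 16,384,000,000  -> "~16 billion synapses"

print(f"{total_neurons:,} neurons, {total_synapses:,} synapses")
```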

Read press release


Cognitive assistant for the visually impaired

IBM is making the real world more accessible for the visually impaired with a “cognitive assistant” to help them “see” and interact more fully with their surroundings. Together with Carnegie Mellon University, IBM researchers open-sourced a platform to help researchers and developers invent new technologies that create engaging experiences for the visually impaired.

Learn more

Learn more about computer vision @ IBM Research


Creating a window into mental health

With AI, our words will be a window into our mental health

Cognitive computers will analyze a patient’s speech or written words to look for tell-tale indicators found in language, including meaning, syntax and intonation. Combining the results of these measurements with those from wearable devices and imaging systems (MRIs and EEGs) can paint a more complete picture of the individual for health professionals to better identify, understand and treat the underlying disease, be it Parkinson’s, Alzheimer’s, Huntington’s disease, PTSD or even neurodevelopmental conditions such as autism and ADHD.

At IBM, scientists are using transcripts and audio from psychiatric interviews, coupled with machine learning techniques, to find patterns in speech that help clinicians accurately predict and monitor psychosis, schizophrenia, mania and depression. Today, it takes only about 300 words to help clinicians predict the probability of psychosis in a patient.
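One family of signals such models examine is how semantically coherent consecutive sentences are, since abrupt topic shifts ("derailment") are a well-known linguistic marker of thought disorder. The sketch below approximates that idea with TF-IDF vectors and cosine similarity; it illustrates the kind of feature involved, not IBM's published feature set or models.

```python
# Illustrative sketch of one speech-derived feature: semantic coherence between
# consecutive sentences in a transcript, approximated here with TF-IDF vectors
# and cosine similarity. Shows the *type* of linguistic signal used; not IBM's method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def coherence_profile(sentences):
    """Return the cosine similarity between each pair of consecutive sentences."""
    vectors = TfidfVectorizer().fit_transform(sentences)
    return np.array([
        cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
        for i in range(len(sentences) - 1)
    ])

transcript = [
    "I went to the store this morning to buy bread.",
    "The bakery near my house was already open.",
    "Satellites orbit much faster than people think.",
]
profile = coherence_profile(transcript)
print(profile)            # low values flag abrupt topic shifts ("derailment")
print(profile.mean())     # a summary statistic a downstream classifier might use
```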

Learn more

AI and machine learning aid schizophrenia research


IBM builds a real-time gesture recognition system

Gesture recognition

IBM scientists paired a special iniLabs DVS128 event camera, modeled after the mammalian retina, with a TrueNorth processor running a neural network they trained to recognize 10 different hand and arm gestures. Unlike a conventional camera and chip like the one in your phone, the system is event-based, meaning it reacts only when there is a change in what it is seeing. This lets the system run on very little power, under 200 mW, a level of efficiency that could enable AI applications to run off the battery of a smartphone or a self-driving car, for example. The team is also making the data used to train the neural network available for download, one of the first event-based datasets provided to the field. The TrueNorth developer community can use the dataset to train their networks to recognize these 10 gestures, and the wider community of event-based vision researchers can use it to benchmark other algorithms and hardware.
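To make the event-based idea concrete, the sketch below shows one common way to work with such data: each event is a (timestamp, x, y, polarity) record emitted only where brightness changes, and events are binned into short time windows to form sparse frames a gesture classifier can consume. The synthetic events and field layout here are illustrative only, not the file format of the released dataset.

```python
# Minimal sketch of event-camera data: each event fires only where brightness
# changes, and binning events into short windows yields sparse 2-channel "frames".
# Synthetic events stand in for real DVS128 recordings; not the dataset's loader.
import numpy as np

WIDTH, HEIGHT = 128, 128          # DVS128 sensor resolution
WINDOW_US = 10_000                # 10 ms accumulation window

# Events as (timestamp in microseconds, x, y, polarity in {0, 1}).
events = np.array(
    [(1200, 5, 7, 1), (3400, 6, 7, 0), (9800, 6, 8, 1), (15000, 40, 90, 1)],
    dtype=[("t", np.int64), ("x", np.int16), ("y", np.int16), ("p", np.int8)],
)

def events_to_frames(events, window_us=WINDOW_US):
    """Accumulate events into per-window histograms, one channel per polarity."""
    n_windows = int(events["t"].max()) // window_us + 1
    frames = np.zeros((n_windows, 2, HEIGHT, WIDTH), dtype=np.float32)
    for ev in events:
        frames[int(ev["t"]) // window_us, int(ev["p"]), int(ev["y"]), int(ev["x"])] += 1.0
    return frames

frames = events_to_frames(events)
print(frames.shape)               # (n_windows, 2, 128, 128) -- classifier input
```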

Access dataset

Learn more about computer vision @ IBM Research


AI for cancer detection

On any given day, radiologists can review thousands of images to make health diagnoses. To make critical decisions, they typically piece together multiple sources of clinical information visually and manually, including electronic health records, research publications and other data. To address this, IBM researchers have harnessed the cognitive computing power of IBM Watson to analyze large amounts of imaging and text in electronic health records. In a new demo developed in collaboration with the Radiological Society, radiologists can select a sample patient case and see how a Watson-powered prototype surfaces insights from the case as it understands, reasons and learns from text and imaging data in real time.

Learn more

Learn more about computer vision @ IBM Research


Discover what IBM is disrupting today