Conversational Intelligence

Exploring AI-based conversational systems in a human-centered approach


The Conversational Intelligence group at IBM Research - Brazil conducts state-of-the-art research aimed at constantly improving IBM’s Watson technology in three main areas: the understanding of human speech, the theoretical foundations of natural-language processing (NLP), and the processing and production of Brazilian languages (including Portuguese and indigenous languages) by machines. Our team has carried out pioneering work on the design and evaluation of conversational systems, the neuro-symbolic classification of the intent of human utterances, the use of large language models (LLMs) for speech tasks, social media analytics, and the processing of ultra-low-resource languages such as local indigenous languages.

Research topics

Conversational AI

The demand for virtual agents that can handle customer needs has continued to increase dramatically. At IBM Research, we’re building the next generation of artificial intelligence systems that can understand what’s being asked of them and how best to respond as efficiently as possible.

Human-Centered AI

AI systems are proliferating in everyday life, and it’s imperative to understand those systems from a human perspective. We design and investigate new forms of human-AI interactions and experiences that enhance and extend human capabilities for the good of our products, clients, and society at large.

Natural Language Processing

Much of the information that can help transform enterprises is locked away in text, like documents, tables, and charts. We’re building advanced AI systems that can parse vast bodies of text to help unlock that data, and that are flexible enough to be applied to any language problem.


As more of the world moves online, the demand for systems that can understand users and speak to them in natural language is growing exponentially. We're working on next-generation AI that learns to decipher and replicate the way humans speak.

Foundation Models

Modern AI models that execute specific tasks in a single field are giving way to ones that learn more generally, and work across domains and problems. Foundation models, which are trained on large, unlabeled datasets and fine-tuned for an array of applications, are driving this shift.
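The pretrain-then-fine-tune workflow described above can be illustrated with a minimal sketch. Here a fixed random projection stands in for a pretrained foundation model's feature extractor (an assumption purely for illustration, not any IBM model), and only a small linear head is trained on a toy labeled task while the backbone stays frozen:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy downstream dataset: two Gaussian clusters (stand-in for labeled task data).
n = 200
X = np.vstack([rng.normal(-1.0, 0.5, (n // 2, 8)),
               rng.normal(+1.0, 0.5, (n // 2, 8))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# "Foundation model" backbone: a frozen feature extractor.
# A fixed random projection is used here as a placeholder for a pretrained model.
W_frozen = rng.normal(size=(8, 16))

def backbone(x):
    return np.tanh(x @ W_frozen)  # frozen features, never updated

# Fine-tuning: train only a small logistic-regression head on the frozen features.
H = backbone(X)
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))  # sigmoid predictions
    g = p - y                               # logistic-loss gradient
    w -= lr * (H.T @ g) / n
    b -= lr * g.mean()

accuracy = ((H @ w + b > 0).astype(int) == y).mean()
print(f"head-only fine-tuning accuracy: {accuracy:.2f}")
```

The design choice this sketch mirrors is the economic one driving the shift: the expensive backbone is trained once on broad data, while each downstream application only trains a cheap task-specific head.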