
From unlabeled text to a working classifier in a few hours

Label Sleuth is an open-source tool that lets users with no machine-learning knowledge build a customized text-classification model from scratch. It’s part of IBM’s larger strategy to make time-saving AI tools available to all.

Text-analysis AI models have become part of everyday life, finishing your sentences, translating the web, and summarizing long passages of text. But adapting them to new tasks typically requires a domain expert to label new examples and a machine-learning expert to train the new model.

“In the real world, you need to tweak and customize the out-of-the-box model,” said Eyal Shnarch, an IBM researcher who specializes in natural language processing. “But there aren’t enough machine-learning experts for everyone who wants a customized model.”

Label Sleuth is meant to change that. The open-source platform allows anyone who works with words to build their own text-classification model. IBM is releasing Label Sleuth this week with academic collaborators at Notre Dame and the University of Texas at Dallas. The work is part of IBM’s continued effort to make open-source natural language processing tools accessible to everyone, including researchers, to allow them to quickly reproduce and build on each other’s work to advance the field.

Label Sleuth can be a helpful resource for anyone tasked with slogging through reams of text, from a lawyer hunting for risky language in a contract, to a historian searching for themes in a stack of records. Label Sleuth walks you through the process of annotating the data and training a classifier to find the needles in a haystack of text.

“Lawyers don’t want to read hundreds of pages of a contract to find the relevant clauses,” said Shnarch, who leads the team that developed Label Sleuth. “They’d rather skim the document quickly with a text classifier that highlights the sentences that require a close read.”

Label Sleuth is designed to be intuitive and to grasp the assigned task quickly. Given a few dozen examples of the text you want to isolate, it can start providing feedback to improve the labeling process. Within a few hours, you have a working model.

Once the model is up and running, human and machine work together to fine-tune it. The model suggests which text the user should label next to most improve its performance. It also flags examples that may be incorrectly labeled so the user can review and correct them. The user, in turn, tells the model when it has made a mistake. As time goes on, less feedback is needed.
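
To make that loop concrete, here is a minimal sketch of uncertainty-based active learning with a label-consistency check, written with scikit-learn on a few hypothetical contract sentences. It illustrates the general technique described above, not Label Sleuth's actual code or API.

```python
# A minimal sketch of human-in-the-loop active learning, assuming scikit-learn.
# The sentences and labels below are hypothetical, for illustration only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["terminate this agreement for convenience",
                 "the parties met on Tuesday"]
labels = [1, 0]  # 1 = clause of interest, 0 = not of interest (hypothetical task)
unlabeled_texts = ["either party may terminate upon 30 days notice",
                   "lunch was provided at the meeting",
                   "liability is limited to fees paid"]

# Train an initial classifier on the small seed of labeled sentences.
vectorizer = TfidfVectorizer()
X_labeled = vectorizer.fit_transform(labeled_texts)
model = LogisticRegression().fit(X_labeled, labels)

# Active learning step: ask the user to label the sentences the model
# is least confident about, since those labels help it improve the most.
X_pool = vectorizer.transform(unlabeled_texts)
confidence = model.predict_proba(X_pool).max(axis=1)
query_order = np.argsort(confidence)  # least confident first
print("Label these next:", [unlabeled_texts[i] for i in query_order[:2]])

# Consistency check: flag labeled examples the model confidently disagrees
# with, so the user can review possible labeling mistakes.
disagrees = model.predict(X_labeled) != np.array(labels)
is_confident = model.predict_proba(X_labeled).max(axis=1) > 0.9
for i in np.where(disagrees & is_confident)[0]:
    print("Review this label:", labeled_texts[i])
```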

“The whole point is interaction and iteration,” said IBM researcher Yannis Katsis. “The goal is to save domain experts time and effort.”

Label Sleuth can also save academic researchers time by helping them manage their unlabeled text data. The team and its academic collaborators plan to integrate the tool into experiments to understand how to make human-machine interactions more effective, and how to improve active-learning and classification algorithms.

“Label Sleuth can lower the barriers to labeling text data while ensuring data quality by keeping human experts in the loop,” said Toby Li, a professor at Notre Dame and project collaborator. “Label Sleuth’s open-source and extensible nature also makes it useful for researchers to deploy their own new machine learning models, interface features, and interaction strategies.”

“To really make machine learning accessible and practical for real-world applications, it’s critical for machines to learn labels efficiently,” said Rishabh Iyer, a computer science professor at UT Dallas and project collaborator. “Label Sleuth achieves this goal with an intuitive user-interface and good active-learning algorithms on the backend. I’m excited to use Label Sleuth in my own research.”

At this week’s NAACL conference on natural language processing, IBM released PrimeQA, the first software library to integrate algorithms for reading and responding to questions in more than 90 languages and for handling question-answering problems embedded in tables, photos, and video.

IBM also released the latest version of a top-performing Abstract Meaning Representation (AMR) semantic parser that translates text into a data structure that captures the text’s meaning. This data structure contains information about people, places, and events mentioned in the text and how they relate, allowing software developers to build on top of the parser. Applications include methods for evaluating the factual accuracy of computer-generated summaries or translating a question into a database query to get an answer.
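
For a sense of what such a data structure looks like, here is the textbook AMR for a toy sentence, written in the PENMAN notation commonly used in the AMR literature. This is a standard illustrative example, not output from IBM's parser.

```python
# Standard AMR example ("The boy wants to go") in PENMAN notation,
# shown here only to illustrate the kind of graph an AMR parser produces.
sentence = "The boy wants to go"
amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
"""
# Read as a graph: a want-01 event whose agent (:ARG0) is a boy, and whose
# theme (:ARG1) is a go-01 event performed by that same boy (the reused
# variable b). Downstream applications walk this graph, for example to check
# whether a summary states the same facts or to build a database query.
print(sentence)
print(amr.strip())
```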

“We’re innovating at every layer of the NLP stack,” said IBM researcher Shila Ofek-Koifman, “from foundations to applications that help users save time and money analyzing and understanding massive amounts of text.”