
How Intel oneAPI tools are accelerating IBM's Watson Natural Language Processing Library

From virtual assistants to language translation, natural language processing (NLP) has revolutionized the way we interact with technology. However, improving the performance of NLP tasks remains a critical challenge. Integrating Intel’s oneAPI and IBM Watson’s NLP Library has significant implications for various industries: it can accelerate the performance of NLP tasks including sentiment analysis, topic modeling, named entity recognition, keyword extraction, text classification, entity categorization, and word embeddings. We’ll break down which NLP tasks can be accelerated using oneAPI, how it enhances their performance, and its future applications across industries.

Introduction to IBM Watson’s NLP Library and Intel oneAPI

IBM Watson’s Natural Language Processing (NLP) Library is an advanced machine learning tool that facilitates the analysis of text data. It is widely used across industries for tasks such as sentiment analysis, entity extraction, and language detection. Intel oneAPI is an integrated, cross-architecture programming model that enables developers to create high-performance, heterogeneous applications for a range of devices. It provides a unified programming environment that supports hardware architectures such as CPUs, GPUs, FPGAs, and other accelerators. Integrating Intel oneAPI with the IBM Watson NLP Library can significantly improve the performance of NLP tasks.

In October 2022, IBM expanded its embeddable AI portfolio with the release of IBM Watson Natural Language Processing Library for Embed, which is designed to help developers add capabilities that process human language to derive meaning and context through intent and sentiment. The Watson NLP team out of IBM Research partnered with Intel to improve Watson NLP performance with Intel oneDNN and TensorFlow, powered by oneAPI, and demonstrated gains of up to 35% in function throughput for key NLP tasks when comparing the same workload on third-generation Xeon Scalable processors. With the latest advancements in the fourth-generation Xeon Scalable processors, overall function throughput for NLP tasks has improved by upwards of 165% compared to the second-generation Xeon Scalable processors currently deployed by IBM for inference on CPUs.

IBM Watson Natural Language Processing Library for Embed

The Watson NLP Library for Embed provides common NLP features, including:

  • Sentiment analysis, which classifies text as positive, negative, or neutral
  • Entity extraction, which detects and extracts named entities such as personally identifiable information (PII)
  • Text classification, which labels a given text and assigns it to a specific category or group

These features provide value in a wide variety of use cases, including voice of the customer, brand intelligence, and contract processing. Because the Watson NLP Library for Embed is containerized, it can be deployed to run anywhere and is easily embeddable in any offering. This gives IBM ecosystem partners the ability to infuse NLP AI into their solutions, using models that have been trained by experts at IBM Research, screened to remove hate, bias, and profanity, and curated specifically for enterprise use cases.
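To make the embedding concrete, here is a minimal Python sketch of a client calling a locally running Watson NLP Library for Embed container over REST. It is illustrative only: the endpoint path, port, and model ID below are assumptions, so consult the product documentation for the values your deployment actually exposes.

    # Minimal client sketch for a locally running Watson NLP for Embed container.
    # ASSUMPTIONS: the REST endpoint path, port 8080, and the stock model ID
    # below are illustrative; check the product docs for your deployment.
    import requests

    RUNTIME_URL = "http://localhost:8080/v1/watson.runtime.nlp.v1/NlpService/SentimentPredict"  # assumed
    MODEL_ID = "sentiment_document-cnn-workflow_en_stock"  # assumed stock model name

    def sentiment(text: str) -> dict:
        """Send one document to the runtime and return its sentiment scores."""
        response = requests.post(
            RUNTIME_URL,
            headers={"grpc-metadata-mm-model-id": MODEL_ID},
            json={"rawDocument": {"text": text}},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        print(sentiment("The new processors cut our inference latency dramatically."))

Because the runtime is a container with an HTTP interface, the same pattern works whether it is deployed on a laptop, in Kubernetes, or inside a partner’s product.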

Understanding oneAPI’s Performance Advantages

oneAPI is a significant breakthrough in the AI landscape. It enables the creation of high-performance applications that execute seamlessly across different hardware architectures. One of the primary advantages of oneAPI is the ability to optimize performance through parallelization: an application can leverage the computing power of various hardware architectures in parallel, improving its overall performance. Additionally, oneAPI provides a unified programming model that simplifies development and reduces development time, letting developers focus on optimizing their application’s performance.

Intel optimizations powered by oneAPI

Intel optimizes deep learning frameworks, including TensorFlow and PyTorch, with the oneAPI Deep Neural Network Library (oneDNN), an open-source, cross-platform performance library of basic building blocks for deep learning applications. It takes advantage of new hardware features and accelerators on Intel Xeon-based infrastructure. These optimizations are targeted at key performance-intensive operations, such as convolution, matrix multiplication, batch normalization, recurrent neural network (RNN) cells, and long short-term memory (LSTM) cells. oneDNN also leverages graph-mode computation by fusing compute- and memory-bound operations to further speed up computation. oneDNN optimizations are available by default in the official TensorFlow release starting with version 2.9.
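Because these optimizations ship in the stock TensorFlow builds, they can be toggled with the documented TF_ENABLE_ONEDNN_OPTS environment variable, which makes before-and-after comparisons straightforward. A minimal sketch, assuming TensorFlow 2.9 or later on an Intel Xeon host:

    # Sketch: confirm TensorFlow is running with oneDNN optimizations.
    # The environment variable must be set BEFORE TensorFlow is imported;
    # set it to "0" to disable oneDNN for a baseline comparison.
    import os
    os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"  # on by default since TF 2.9 on x86

    import tensorflow as tf

    # oneDNN-enabled builds print a startup log line beginning with
    # "oneDNN custom operations are on". As a functional check, run an
    # operation that oneDNN accelerates, such as matrix multiplication:
    a = tf.random.uniform((1024, 1024))
    b = tf.random.uniform((1024, 1024))
    print(tf.linalg.matmul(a, b).shape)  # expect (1024, 1024)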

Testing out the Watson NLP Library for Embed

When testing¹ the Watson NLP Library for Embed with oneDNN optimizations, text and sentiment classification tasks showed the greatest improvements in both duration and function throughput. When comparing the same workload on third-generation processors, these tasks saw a 35% improvement in function throughput. On the latest Intel fourth-generation Xeon Scalable processors, however, sentiment classification function throughput improved by up to 165%, with the largest gains seen in the BERT and Transformer architectures. These two NLP architectures are the best choice when high-quality model inference is required and greater computing resources are available.
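The numbers above come from IBM’s internal testing, but the shape of such a measurement is easy to reproduce. Below is a minimal, hypothetical harness for measuring function throughput (completed inferences per second); the `infer` callable stands in for any single-document client function, such as the `sentiment` sketch shown earlier, and is not IBM’s actual benchmark code.

    # Hypothetical function-throughput harness: time a fixed batch of
    # inference calls and report completed inferences per second. Run it
    # once with oneDNN enabled and once disabled to compare the two rates.
    import time
    from typing import Callable, Sequence

    def function_throughput(infer: Callable[[str], object],
                            texts: Sequence[str],
                            repeats: int = 10) -> float:
        """Return completed inferences per second over `repeats` passes."""
        start = time.perf_counter()
        for _ in range(repeats):
            for text in texts:
                infer(text)  # any single-document inference call
        elapsed = time.perf_counter() - start
        return (repeats * len(texts)) / elapsed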

[Figure: Performance comparison of function throughput across Intel Xeon Scalable processor generations]

“Integrating TensorFlow optimizations powered by Intel’s oneAPI Deep Neural Network library into the IBM Watson NLP Library for Embed led to an upwards of 165% improvement in function throughput on text and sentiment classification tasks on Intel fourth-gen Xeon Scalable Processors,” said Bill Higgins, director of development for Watson AI at IBM Research. “This improvement in function throughput results in a shorter duration of inference from the model, leading to quicker response times when embedding the Watson NLP Library in our clients’ offerings.”

“The IBM Watson NLP Library for Embed is a great product for ecosystem partners looking to infuse Natural Language Processing into their enterprise solutions. Intel is excited to collaborate with IBM on Watson NLP’s adoption of the oneAPI Deep Neural Network library and Intel’s optimizations for TensorFlow,” said Wei Li, Intel VP and general manager for AI and analytics. “The performance acceleration delivered through our software integration work with IBM and the new Intel fourth-gen Xeon Scalable Processors has resulted in upwards of 165% improvement in function throughput for text and sentiment classification tasks. We look forward to the continued partnership with IBM to explore the next frontiers of NLP and beyond through our software and hardware AI acceleration.”

The IBM Watson Natural Language Processing Library helps developers build enterprise-ready solutions through robust AI models, extensive language coverage, and scalable container orchestration. Its library form provides the flexibility to deploy natural language AI in any environment, and clients can take advantage of the improved performance on Intel fourth-gen Xeon Scalable processors.

In conclusion, the integration of Intel oneAPI and IBM Watson’s NLP Library marks a significant breakthrough in the field of AI. The performance advantages of oneAPI and the advanced capabilities of the IBM Watson NLP Library enable the acceleration of NLP tasks and text analysis with state-of-the-art model accuracy.

References

  1. Hardware setup: Intel Xeon Silver 4210 CPU @ 2.20GHz (2 vCPUs); Intel Xeon Platinum 8480+ @ 2.0GHz (2 vCPUs); 20 GB of memory and 2 vCPUs configured via the Docker runtime. Software setup: Watson NLP Library for Embed 1.0 or higher.