“Integrating TensorFlow optimizations powered by Intel’s oneAPI Deep Neural Network Library into the IBM Watson NLP Library for Embed led to upwards of a 165% improvement in function throughput on text and sentiment classification tasks on Intel fourth-gen Xeon Scalable processors,” said Bill Higgins, director of development for Watson AI at IBM Research. “This improvement in function throughput shortens model inference time, leading to quicker response times when the Watson NLP Library is embedded in our clients’ offerings.”
“The IBM Watson NLP Library for Embed is a great product for ecosystem partners looking to infuse natural language processing into their enterprise solutions. Intel is excited to collaborate with IBM on Watson NLP’s adoption of the oneAPI Deep Neural Network Library and Intel’s optimizations for TensorFlow,” said Wei Li, Intel vice president and general manager for AI and Analytics. “The performance acceleration delivered through our software integration work with IBM and the new Intel fourth-gen Xeon Scalable processors has resulted in upwards of a 165% improvement in function throughput for text and sentiment classification tasks. We look forward to the continued partnership with IBM to explore the next frontiers of NLP and beyond through our software and hardware AI acceleration.”
The IBM Watson Natural Language Processing Library helps teams develop enterprise-ready solutions through robust AI models, extensive language coverage, and scalable container orchestration. Delivered as an embeddable library, it provides the flexibility to deploy natural language AI in any environment, and clients can take advantage of the improved performance on Intel fourth-gen Xeon Scalable processors.
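For readers who want to experiment with the same oneDNN-powered TensorFlow path outside of Watson NLP, the sketch below shows the commonly documented toggle. This is a minimal illustration, not IBM's integration code: it assumes stock TensorFlow 2.x, where the oneDNN (oneAPI Deep Neural Network Library) kernels are bundled and controlled by the `TF_ENABLE_ONEDNN_OPTS` environment variable.

```python
import os

# TF_ENABLE_ONEDNN_OPTS toggles the oneDNN kernels bundled with stock
# TensorFlow. It must be set in the environment BEFORE TensorFlow is
# imported; on recent TensorFlow releases for x86 Linux the
# optimizations default to on, so this makes the choice explicit.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# Import TensorFlow only after the variable is set, e.g.:
# import tensorflow as tf
# print(tf.__version__)  # oneDNN-enabled builds log a notice at import

print("oneDNN optimizations requested:",
      os.environ["TF_ENABLE_ONEDNN_OPTS"] == "1")
```

Setting the variable to `"0"` instead disables the oneDNN kernels, which is useful when comparing throughput with and without the optimization on the same hardware.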
In conclusion, the integration of Intel oneAPI with IBM Watson’s NLP Library represents a significant advance in the field of AI. The performance gains from oneAPI, combined with the advanced capabilities of the Watson NLP Library, accelerate NLP tasks and text analysis while preserving state-of-the-art model accuracy.
Hardware setup: Intel Xeon Silver 4210 CPU @ 2.20 GHz and Intel Xeon Platinum 8480+ @ 2.0 GHz, each configured with 2 vCPUs and 20 GB of memory via the Docker runtime. Software setup: Watson NLP Library for Embed 1.0 or later. ↩