COBOL programmers are getting harder to find. IBM’s code-writing AI can help
IBM’s new modernization solution, watsonx Code Assistant for IBM Z, lets developers selectively translate COBOL applications to high-quality Java code optimized for IBM Z and the hybrid cloud.
COBOL, the first programming language for business, helped build the modern software industry. Six decades later, COBOL-powered mainframes process an estimated 70% of banking transactions globally.
Applications based on COBOL have stuck around for as long as they have because of their security, reliability, and transactional performance. But maintaining and updating these applications is a growing challenge. There just aren’t as many COBOL coders as there used to be, and the rise of cloud computing has led to a shift toward modular business applications that tend to be written in Java.
As enterprises look to modernize, they need skilled developers who understand mainframe applications, as well as tools designed specifically to break up and transform large, complex mainframe applications.
To simplify the mainframe modernization journey, IBM has developed watsonx Code Assistant for IBM Z, a solution powered by automated tooling and IBM’s 20-billion-parameter “Granite” large language model for code. It allows enterprises to transform monolithic COBOL applications into services optimized for IBM Z.
Enterprises can selectively modernize applications with the greatest return and integrate them with existing services to maintain IBM Z’s high performance. IBM will make this solution available to customers on Thursday, October 26.
“Generative AI can make modernization less overwhelming for enterprises,” said Ruchir Puri, chief scientist at IBM Research. “Watsonx Code Assistant for Z can help companies selectively and incrementally refactor business services in COBOL, prioritizing tasks with the greatest payoff first.”
Application modernization can be slow-going and risky, especially for banks and governments that provide mission-critical services. Large applications have interdependent features that have been repeatedly updated over the years. Disentangling them into stand-alone services can be complex and time-consuming. And once refactoring is complete, developers face the additional hurdle of translating those services from COBOL to Java.
IBM watsonx Code Assistant for Z has several capabilities designed to save developers time and reduce the chance of mistakes. An application discovery tool analyzes the application and maps out its dependencies, giving developers a high-level understanding of how the code works. An automated refactoring tool takes this information and helps developers identify the business services they want to extract and refactor. Metadata collected during these first two steps provides critical context for an optional third capability, powered by generative AI.
A large language model (LLM), trained on the world’s programming languages and tuned on pairs of COBOL-Java programs for IBM Z, takes the refactored COBOL program and translates it to Java. An automated unit-testing tool, to be released later, will allow developers to quickly test and validate the newly transformed Java code.
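IBM has not yet detailed how the forthcoming unit-testing tool will work, but conceptually the validation step comes down to exercising the translated Java code against results produced by the original COBOL program for the same inputs. Here is a minimal sketch using JUnit 5, where InterestCalculator and monthlyInterest are hypothetical names standing in for a translated business service:

```java
// Illustrative sketch only; InterestCalculator and monthlyInterest are
// hypothetical names for a COBOL service translated to Java.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class InterestCalculatorTest {

    @Test
    void matchesExpectedCobolBehavior() {
        // Expected value taken from the original COBOL program's output for
        // the same inputs: a 10,000.00 balance at a 5% annual rate, with the
        // monthly interest rounded to cents (10000 * 0.05 / 12 = 41.67).
        BigDecimal expected = new BigDecimal("41.67");

        BigDecimal actual = new InterestCalculator()
                .monthlyInterest(new BigDecimal("10000.00"), new BigDecimal("0.05"));

        assertEquals(expected, actual);
    }
}
```

Tests like this one give developers a concrete check that the transformed Java service preserves the behavior of the COBOL it replaces before it goes anywhere near production.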
The solution is designed to operate with the full suite of z/OS applications and environments.
IBM trained the base model on open-source code repositories like GitHub, after extensive filtering for toxic, sensitive, or copyright-protected code. In total, more than 1.6 trillion code tokens (words and parts of words) went into training the base Granite model. To strengthen the model’s understanding of COBOL and Java, it was fine-tuned on thousands of pairs of enterprise programs in both languages. Critical gaps were filled in with AI-generated synthetic code.
When comparing watsonx Code Assistant for IBM Z with ChatGPT (OpenAI’s LLM-powered chatbot), IBM researchers found that WCA for Z outperformed ChatGPT on COBOL translation.
Several factors explain the IBM Granite model’s high performance:
The first is the quality of the data used to tune and customize the model. IBM programmers fluent in COBOL and Java worked side-by-side to create thousands of pairs of functionally equivalent programs for IBM Z.
It turns out that translating code literally, line by line, works about as poorly for programs as it does for natural language. Translate COBOL to Java this way and you get ‘JOBOL,’ code that’s difficult to read and maintain. IBM programmers instead worked carefully to ensure that the logic of each COBOL program was expressed in correct, idiomatic Java.
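As a rough illustration (not output from the IBM tool), the sketch below contrasts a literal, ‘JOBOL’-style rendering of a simple COBOL summing loop with an idiomatic Java version of the same business rule. The COBOL logic is paraphrased in the leading comment, and all names are hypothetical.

```java
// Illustrative sketch only, not actual watsonx Code Assistant output.
// Original COBOL logic (paraphrased): PERFORM VARYING IDX FROM 1 BY 1
//   UNTIL IDX > ORDER-COUNT, ADD ORDER-AMOUNT(IDX) TO TOTAL-AMOUNT.

import java.math.BigDecimal;
import java.util.List;

public class OrderTotals {

    // "JOBOL": a literal, line-by-line rendering that mirrors COBOL's
    // working-storage variables and index-driven PERFORM loop.
    static BigDecimal totalAmountLiteral(BigDecimal[] orderAmount, int orderCount) {
        BigDecimal totalAmount = BigDecimal.ZERO;
        int idx = 1;
        while (!(idx > orderCount)) {
            totalAmount = totalAmount.add(orderAmount[idx - 1]);
            idx = idx + 1;
        }
        return totalAmount;
    }

    // Idiomatic Java: the same business rule expressed with collections
    // and streams, which is easier to read, test, and maintain.
    static BigDecimal totalAmountIdiomatic(List<BigDecimal> orderAmounts) {
        return orderAmounts.stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}
```

Both methods compute the same total; the difference is that the second reads like Java rather than transliterated COBOL, which is the kind of output the curated COBOL-Java training pairs are meant to teach the model.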
“We know COBOL and Java on z/OS better than anyone,” said Richard Larin, product lead for IBM watsonx Code Assistant for Z. “We’re imparting that knowledge to our large language model so it knows how to handle our customers’ use cases.”
Sometimes that means keeping a microservice in COBOL, and not translating it to Java. “It can mix and match,” Puri said. “Since IBM Z supports the efficient deployment of both COBOL and Java applications, it can offer high performance, resilience, and security for modernized applications in either language.”
The model’s second comparative advantage is the size of its training data: 1.6 trillion tokens of code compared to less than 1 trillion tokens for most code-only models on the market. The more code tokens a model ingests, the better it typically performs on a variety of coding tasks.
The model’s third advantage is the length of its context window, the number of tokens that can fit into the user’s prompt. A 32,000-token context window allows the model to better capture the complexities of both the COBOL source and its Java translation.
The watsonx Code Assistant for Z solution is IBM’s latest effort to modernize code with the help of foundation models, AI models pre-trained on internet-scale datasets that can be redeployed for many tasks.
In 2021, IBM Research launched Project CodeNet, a dataset aimed at teaching AI systems how to program, which was cited as a source for DeepMind’s AlphaCode project. Project CodeNet was followed by Project Wisdom for Red Hat Ansible, an effort to use code-writing AI models to build IT automations in plain English.
This work also underlies watsonx Code Assistant for Red Hat Ansible Lightspeed, an AI service that streamlines Ansible code-writing to help enterprises accelerate IT automation across the business. The solution is currently in tech preview.
So far, IBM Consulting has cut the time it takes to generate Ansible playbooks for automating IT workflows by 30%.