Workshop paper

Multi-task Code LLMs: Data Mix or Model Merge?

Abstract

A recent research trend advocates pairing smaller, specialized code LLMs with frontier models in agentic frameworks, sparking interest in efficient multi-task learning strategies that balance performance, resource constraints, and deployment costs. We investigate optimal approaches for creating small, multi-task code LLMs by comparing data mixing and model merging strategies. We conduct extensive experiments across two model families (Qwen Coder and DeepSeek Coder) at two scales (2B and 7B parameters), fine-tuning them for code generation and code summarization. Our evaluation on the HumanEval, MBPP, and Code-to-Test (CodeXGLUE) benchmarks reveals that model merging achieves the best overall performance at the larger scale across both model families, retaining 96% of specialized-model performance on code generation while maintaining summarization capabilities. Notably, merged models can even surpass individually fine-tuned models: our best Qwen Coder 2.5 7B configuration achieves 92.7% Pass@1 on HumanEval, compared to 90.9% for its task-specific fine-tuned equivalent. At the smaller scale, we instead find data mixing to be the preferred strategy for obtaining a capable multi-task model. We further introduce a weight analysis technique to understand how different tasks affect model parameters and what this implies for merging strategies. Our results suggest that careful merging and mixing strategies can combine task-specific capabilities without significant performance degradation, making them well suited to resource-constrained deployment scenarios.
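
To make the merging idea concrete, the following is a minimal sketch assuming the merge resembles simple linear weight interpolation between two task-specific checkpoints; the function name, the toy "state dicts", and the `alpha` parameter are illustrative assumptions, not the paper's actual merging procedure.

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Interpolate two checkpoints parameter-by-parameter:
    merged = alpha * A + (1 - alpha) * B.

    Both checkpoints must come from the same architecture, so their
    parameter names and shapes match.
    """
    assert sd_a.keys() == sd_b.keys(), "checkpoints must share an architecture"
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(sd_a[name], sd_b[name])]
        for name in sd_a
    }

# Toy stand-ins for a code-generation and a code-summarization checkpoint.
gen_model = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0, 0.0]}
sum_model = {"layer.weight": [3.0, 4.0], "layer.bias": [2.0, 2.0]}

merged = merge_state_dicts(gen_model, sum_model, alpha=0.5)
print(merged["layer.weight"])  # [2.0, 3.0]
print(merged["layer.bias"])    # [1.0, 1.0]
```

In practice the same interpolation would run over real tensors (e.g. a PyTorch `state_dict`); `alpha` then controls the trade-off between the two tasks' capabilities.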