Keeping an Eye on LLM Unlearning: The Hidden Risk and Remedy
Jie Ren, Zhenwei Dai, et al.
NeurIPS 2025
Instead of querying LLMs in a one-shot manner and hoping to get the right answer for a reasoning task, we propose a paradigm we call \emph{verbalized algorithms} (VAs), which leverages classical algorithms with established theoretical guarantees. VAs decompose a task into elementary operations on natural language strings and limit the scope of the LLM to only those operations where it is strictly necessary. For example, to sort a collection of natural language strings, \emph{verbalized sorting} uses an LLM as a binary comparison oracle inside a known and well-analyzed sorting algorithm (e.g., a bitonic sorting network). We demonstrate the effectiveness of this approach on sorting and clustering tasks.
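A minimal sketch of the comparison-oracle idea described above: a bitonic sorting network that only ever asks a pluggable `precedes(a, b)` oracle which of two strings should come first. In the paper's setting that oracle would be an LLM call; here a plain lexicographic comparator is used as a deterministic stand-in, and the `bitonic_sort` function name is our own, not from the paper.

```python
def bitonic_sort(items, precedes):
    """Sort items ascending using only pairwise precedes(a, b) queries.

    precedes(a, b) should return True if a ought to come before b.
    In a verbalized-sorting setup this would wrap an LLM prompt;
    here any deterministic comparator works as a stand-in.
    """
    arr = list(items)
    n = len(arr)
    if n == 0 or (n & (n - 1)) != 0:
        raise ValueError("this simple bitonic network needs a power-of-two length")
    k = 2
    while k <= n:          # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:      # compare-exchange distance within each merge stage
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    # Swap when the pair is out of order for its direction.
                    if (ascending and precedes(arr[partner], arr[i])) or \
                       (not ascending and precedes(arr[i], arr[partner])):
                        arr[i], arr[partner] = arr[partner], arr[i]
            j //= 2
        k *= 2
    return arr


# Stand-in oracle: lexicographic order instead of an LLM judgment.
words = ["pear", "fig", "banana", "kiwi", "apple", "mango", "plum", "grape"]
print(bitonic_sort(words, lambda a, b: a < b))
```

Because the network's compare-exchange schedule is fixed in advance and independent of the oracle's answers, all comparisons within a stage could be issued to the LLM in one batch, which is one practical appeal of using a sorting network rather than, say, quicksort.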
Byungchul Tak, Shu Tao, et al.
IC2E 2016
Tian Gao, Amit Dhurandhar, et al.
NeurIPS 2025
Kevin Gu, Eva Tuecke, et al.
ICML 2024