Foundations and Applications in Large-scale AI Models: Pre-training, Fine-tuning, and Prompt-based Learning
Abstract
Deep learning techniques have advanced rapidly in recent years, driving significant progress in pre-trained and fine-tuned large-scale AI models. In natural language processing, for example, the traditional "pre-train, fine-tune" paradigm is shifting towards the "pre-train, prompt, and predict" paradigm, which has achieved great success on many tasks across different application domains, such as ChatGPT/Bard for Conversational AI and P5 for unified recommendation. Moreover, there has been growing interest in models that combine vision and language modalities (vision-language models), which are applied to tasks such as Visual Captioning/Generation. Given this recent technological revolution, it is essential to have a workshop at the KDD conference that emphasizes these paradigm shifts and highlights the paradigms with the greatest potential to address diverse tasks. This workshop will provide a platform for academic and industrial researchers to showcase their latest work, share research ideas, discuss open challenges, and identify areas where further research is needed in pre-training, fine-tuning, and prompt-based learning methods for large-scale AI models. The workshop will also foster the development of a strong research community focused on solving the challenges of large-scale AI models and on delivering impactful strategies that can improve people's lives. We invite submissions of long (eight pages) and short (four pages) papers, representing original research, preliminary research results, and proposals for new work from academia or industry. All submissions will be single-blind and will be peer-reviewed by an international program committee of academic researchers and industry professionals. Accepted submissions must be presented at the workshop and will be published in dedicated workshop proceedings by the workshop organizers.