Avishai Gretz, Alon Halfon, et al.
EMNLP 2023
We develop a mathematical formulation for teaching generative models to a learner whose learning processes and cognitive behaviors may be analytically intractable but can be simulated numerically. The model accounts for the learner's bias (prior knowledge) and memory process using stochastic models. We also present an optimization framework for solving the non-convex, stochastic optimization problems that arise in machine teaching. We discuss the algorithm design and analyze the conditions under which the proposed optimization algorithms converge locally. A number of example cases illustrate the algorithmic ideas and demonstrate their efficiency.
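The abstract above frames machine teaching as a non-convex, stochastic optimization problem over a set of teaching examples, where the learner is available only as a numerical simulator. The Python sketch below is a minimal illustration of that setup, not the paper's algorithm: a toy linear learner with a prior and noisy updates stands in for the simulated learner, and a zeroth-order stochastic gradient estimate optimizes the teaching set. All names, the learner model, and the optimizer choice are assumptions made for illustration.

# Minimal illustrative sketch (assumptions, not the paper's method):
# machine teaching posed as stochastic optimization over a teaching set,
# with the learner treated as a black-box numerical simulator.
import numpy as np

rng = np.random.default_rng(0)

D = 3                                   # feature dimension
K = 3                                   # number of teaching examples
W_TARGET = np.array([1.0, -2.0, 0.5])   # model the teacher wants the learner to reach
W_PRIOR = np.zeros(D)                   # learner's prior knowledge (bias)

def simulate_learner(teaching_x, teaching_y, steps=20, lr=0.1, noise=0.01):
    """Black-box learner: noisy SGD on squared loss, starting from its prior."""
    w = W_PRIOR.copy()
    for _ in range(steps):
        for x, y in zip(teaching_x, teaching_y):
            grad = (x @ w - y) * x
            w -= lr * (grad + noise * rng.standard_normal(D))  # noisy / imperfect updates
    return w

def teaching_risk(teaching_x):
    """Teacher's objective: distance of the learner's final model from the target."""
    y = teaching_x @ W_TARGET            # labels consistent with the target model
    w_learned = simulate_learner(teaching_x, y)
    return float(np.sum((w_learned - W_TARGET) ** 2))

# Zeroth-order stochastic optimization of the teaching set: since the learner is
# only simulated, gradients of the risk are estimated via random perturbations.
X = rng.standard_normal((K, D))
mu, step = 0.05, 0.02
for it in range(200):
    U = rng.standard_normal((K, D))
    g = (teaching_risk(X + mu * U) - teaching_risk(X - mu * U)) / (2 * mu) * U
    X -= step * g
    if it % 50 == 0:
        print(f"iter {it:3d}  teaching risk = {teaching_risk(X):.4f}")

The zeroth-order estimator is one simple way to handle a learner that can only be simulated; any stochastic optimizer that works from noisy objective evaluations would fit the same framework.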
Ashish Ranjan
DAC 2025
Daiki Kimura, Tatsuya Ishikawa, et al.
IPSJ 2024
Ching-Huei Tsou, Michal Ozery-Flato, et al.
ISMB 2025