Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead. Rickard Gabrielsson, Jiacheng Zhu, et al. ICML 2025.
LiveXiv - A Multi-Modal live benchmark based on Arxiv papers content. Nimrod Shabtay, Felipe Maia Polo, et al. ICLR 2025.
NumeroLogic: Number Encoding for Enhanced LLMs' Numerical Reasoning. Eliyahu Schwartz, Leshem Choshen, et al. EMNLP 2024.
Fuse to Forget: Bias Reduction and Selective Memorization through Model Fusion. Kerem Zaman, Leshem Choshen, et al. EMNLP 2024.
Deductive Closure Training of Language Models for Coherence, Accuracy, and Updatability. Afra Feyza Akyürek, Ekin Akyürek, et al. ACL 2024.
Data Contamination Report from the 2024 CONDA Shared Task. Oscar Sainz, Iker García-Ferrero, et al. ACL 2024.
Asymmetry in Low-Rank Adapters of Foundation Models. Jiacheng Zhu, Kristjan Greenewald, et al. ICML 2024.
Kristjan Greenewald, Senior Research Scientist and Manager, Statistical Methods for Large Language Models.