Important information
Location: IBM Research – Israel Haifa site, University of Haifa Campus, Mt. Carmel
Date: Tuesday, March 04, 2025
Time: 12:00 - 17:00
Moderator: Dr. Noam Slonim, IBM
Ubiquitin Proteolytic System - From Basic Mechanisms thru Human Diseases and on to Drug Development
Nobel Prize winner Prof. Aaron Ciechanover
Between the 1950s and 1980s, most studies in biomedicine focused on the central dogma - the translation of the information coded by DNA to RNA and proteins. Protein degradation was a neglected area, considered to be a non-specific, dead-end process. While it was known that proteins do turn over, the high specificity of the process - where distinct proteins are degraded only at certain time points, or when they are not needed any more, or following denaturation/misfolding when their normal and active counterparts are spared - was not appreciated. The discovery of the lysosome by Christian de Duve did not significantly change this view, but it gradually became clear that this organelle is involved mostly in the degradation of extracellular proteins, and that the lysosomal proteases cannot be substrate-specific. The discovery of the complex cascade of the ubiquitin-proteasome pathway solved the enigma. It is clear now that degradation of cellular proteins is a highly complex, temporally controlled, and tightly regulated process that plays major roles in a broad array of basic cellular processes such as cell cycle and differentiation, communication of the cell with the extracellular environment, and maintenance of cellular quality control. With the multitude of substrates targeted and the myriad processes involved, it is not surprising that aberrations in the pathway have been implicated in the pathogenesis of many diseases, certain malignancies and neurodegenerative disorders among them, and that, consequently, the system has become a major platform for drug development.
NLP in the Wild — Harnessing Large Language Models for Multi-Disciplinary Research
Dr. Gabriel Stanovsky, Hebrew University
Natural language processing has made unprecedented strides in recent years, with large language models (LLMs) achieving state-of-the-art results while generalizing to new tasks, domains, and datasets. These advancements open the door to applying LLMs to long-standing research questions in disciplines where the bulk of the body of knowledge is stored as free-form natural language, including archaeology, law, medicine, and more. In this talk I will outline steps made in this direction, showing great promise as well as novel challenges, such as dealing with low-resource languages and with long inputs that span many documents.
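As a rough illustration of one common way such pipelines handle inputs that span many documents, the sketch below chunks each source, queries an LLM per chunk, and then synthesizes a single answer. This is a minimal sketch only; the `call_llm` helper, the prompts, and the chunking strategy are hypothetical placeholders, not the approach presented in the talk.

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever LLM client a project actually uses."""
    raise NotImplementedError("plug in your preferred LLM API here")

def chunk(text: str, max_chars: int = 4000) -> List[str]:
    """Naive fixed-size chunking; real pipelines would split on document structure."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def answer_over_corpus(question: str, documents: List[str]) -> str:
    """Map-reduce over a corpus: extract findings per chunk, then combine them."""
    partial_findings = []
    for doc in documents:
        for piece in chunk(doc):
            prompt = (
                f"Question: {question}\n\n"
                f"Source text:\n{piece}\n\n"
                "Relevant findings:"
            )
            partial_findings.append(call_llm(prompt))
    synthesis_prompt = (
        f"Question: {question}\n\n"
        "Findings from individual sources:\n" + "\n".join(partial_findings) +
        "\n\nSynthesize a single answer based on these findings:"
    )
    return call_llm(synthesis_prompt)
```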
Technology and Biology Intertwined: Exploring Cross-Disciplinary Innovations
Prof. Ora Schueler-Furman, Hebrew University
This year’s Nobel Prize in Chemistry highlights two major breakthroughs of AI in biology: One half was awarded to Demis Hassabis and John Jumper of Google DeepMind for their development of AlphaFold, a deep learning protocol that predicts protein structure from sequence at unprecedented speed and accuracy, solving a decades-old challenge. The other half went to David Baker from the University of Washington, Seattle, for protein design: The Rosetta suite developed in his group made it possible to design completely novel proteins from scratch. The impact of both has been, and continues to be, huge: The availability of protein structure models for billions of proteins has advanced biological understanding to the next level, and new design capabilities allow simple, effective design of proteins, and recently also of small molecules, that will impact our lives immensely, ranging from medicine, with improved drugs for the treatment of diseases, to industry, with improved degradation of plastic and other waste, and much more. Deep learning has found a fruitful, data-rich, and complex field.
To a certain extent, this now allows us to focus on basic as well as applied questions without the need for extensive method development. I will present the major steps that have made this revolution possible and describe exciting new cross-disciplinary applications that only a few years ago were mere dreams. I will also mention the efforts made by the design community to adhere to ethical standards that prevent misuse of the powerful new tools.
System 2 in Visual Generative AI
Gal Chechik, Sr. Director of AI Research at NVIDIA, Professor at BIU
Between training and inference lies a growing class of AI problems that involve fast optimization of a pre-trained model for a specific inference task. Like the human cognitive "system 2," these are not pure “feed-forward” inference problems applied to a pre-trained model, because they involve some non-trivial inference-time optimization beyond what the model was trained for; neither are they training problems, because they focus on a specific input. These compute-heavy inference workflows raise new challenges in machine learning and open opportunities for new types of user experiences and use cases. In this talk, I will focus on various system-2 problems in vision generation, including few-shot fine-tuning and inference-time optimization. I'll cover personalization of vision-language models using textual inversion, as well as techniques for model inversion, prompt-to-image alignment, and consistent generation. I will also discuss the generation of rare classes, and future directions.
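To make the "optimize at inference time" idea concrete, here is a minimal, self-contained sketch in the spirit of textual inversion: a single new token embedding is optimized against a frozen pre-trained model while everything else stays fixed. All names below are illustrative stand-ins (a toy encoder and a feature-matching loss replace the real diffusion pipeline), not the specific method presented in the talk.

```python
import torch
import torch.nn as nn

EMB_DIM = 64  # toy embedding size, chosen only for illustration

class FrozenTextEncoder(nn.Module):
    """Placeholder for a pre-trained text encoder; its weights stay frozen."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(EMB_DIM, EMB_DIM)

    def forward(self, token_embeddings):
        # Pool per-token embeddings into a single prompt embedding.
        return self.proj(token_embeddings).mean(dim=1)

encoder = FrozenTextEncoder()
for p in encoder.parameters():
    p.requires_grad_(False)

# The only trainable parameter: an embedding for a new pseudo-token, e.g. "<my-concept>".
concept_embedding = nn.Parameter(torch.randn(EMB_DIM) * 0.02)

# Frozen embeddings of the rest of the prompt, e.g. "a photo of <my-concept>".
prompt_context = torch.randn(1, 3, EMB_DIM)

# Target features standing in for what would be derived from a few reference images.
target_features = torch.randn(1, EMB_DIM)

optimizer = torch.optim.Adam([concept_embedding], lr=1e-2)

for step in range(200):
    tokens = torch.cat([prompt_context, concept_embedding.view(1, 1, -1)], dim=1)
    pred = encoder(tokens)
    # A real pipeline would use the diffusion denoising loss; a simple
    # feature-matching loss stands in for it here.
    loss = torch.nn.functional.mse_loss(pred, target_features)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The design point the sketch illustrates is that only the new concept embedding receives gradients; the pre-trained model is used purely as a frozen scoring function at inference time.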