Characterizing pre-trained and task-adapted molecular representations. Celia Cintas, Payel Das, et al. NeurIPS 2023.
On Robustness-Accuracy Characterization of Large Language Models using Synthetic Datasets. Ching-Yun Ko, Pin-Yu Chen, et al. ICML 2023.
Consistent Training via Energy-Based GFlowNets for Modeling Discrete Joint Distributions. Chanakya Ekbote, Moksh Jain, et al. NeurIPS 2022.
Reducing Down(stream)time: Pretraining Molecular GNNs using Heterogeneous AI Accelerators. Jenna Bilbrey, Kristina Herman, et al. NeurIPS 2022.
Protein Representation Learning by Geometric Structure Pretraining. Zuobai Zhang, Minghao Xu, et al. ICML 2022.
Benchmarking deep generative models for diverse antibody sequence design. Igor Melnyk, Payel Das, et al. NeurIPS 2021.
Grapher: Multi-Stage Knowledge Graph Construction using Pretrained Language Models. Igor Melnyk, Pierre Dognin, et al. NeurIPS 2021.