On Robustness-Accuracy Characterization of Large Language Models using Synthetic Datasets. Ching-yun Ko, Pin-Yu Chen, et al. ICML 2023.
Keeping Up with the Language Models: Robustness-Bias Interplay in NLI Data and Models. Ioana Baldini Soares, Chhavi Yadav, et al. ACL 2023.
Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models. Sourya Basu, Prasanna Sattigeri, et al. AAAI 2023.
Accelerating Material Design with the Generative Toolkit for Scientific Discovery (GT4SD). Matteo Manica, Joris Cadow, et al. MRS Fall Meeting 2022.
SynBench: Task-Agnostic Benchmarking of Pretrained Representations using Synthetic Data. Ching-yun Ko, Pin-Yu Chen, et al. NeurIPS 2022.
Consistent Training via Energy-Based GFlowNets for Modeling Discrete Joint Distributions. Chanakya Ekbote, Moksh Jain, et al. NeurIPS 2022.
Reducing Down(stream)time: Pretraining Molecular GNNs using Heterogeneous AI Accelerators. Jenna Bilbrey, Kristina Herman, et al. NeurIPS 2022.
Accelerating Material Design with the Generative Toolkit for Scientific Discovery (GT4SD). Matteo Manica, Joris Cadow, et al. ACS Fall 2022.
Protein Representation Learning by Geometric Structure Pretraining. Zuobai Zhang, Minghao Xu, et al. ICML 2022.