The black-box nature of most artificial intelligence (AI) models encourages the development of explainability methods to engender trust in the AI decision-making process. We aimed to analyze the possible role of Covid-19 in bringing explainable AI (XAI) to the forefront of biomedical research. We automatically extracted from the PubMed database biomedical XAI studies related to concepts of causality or explainability, and manually labeled 1,603 papers with respect to XAI categories. To compare the trends pre- and post-Covid-19, we fit a change point detection model and evaluated significant changes in publication rates. We show that the advent of Covid-19 at the beginning of 2020 could be the driving factor behind an increased focus on XAI, playing a crucial role in accelerating an already evolving trend. Finally, we discuss the future societal use and impact of XAI technologies, and potential future directions for those who pursue fostering clinical trust with interpretable machine learning models.
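The change point analysis mentioned above can be illustrated with a minimal sketch: fit a piecewise-constant model to a series of yearly publication counts and pick the single breakpoint that minimizes the within-segment sum of squared errors. The counts below are invented for illustration only, not the paper's data, and the single-breakpoint least-squares criterion is one simple choice among many change point detection methods.

```python
def best_change_point(counts):
    """Return the index k that splits `counts` into two segments
    minimizing the total within-segment sum of squared errors
    (a piecewise-constant fit with one change point)."""
    def sse(seg):
        # Sum of squared deviations from the segment mean.
        if not seg:
            return 0.0
        mean = sum(seg) / len(seg)
        return sum((x - mean) ** 2 for x in seg)

    best_k, best_cost = None, float("inf")
    for k in range(1, len(counts)):
        cost = sse(counts[:k]) + sse(counts[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Hypothetical yearly XAI publication counts (illustrative values only).
counts = [10, 12, 15, 20, 25, 30, 80, 120]
k = best_change_point(counts)
# counts[:k] is the pre-change regime, counts[k:] the post-change regime.
```

With these made-up counts the detected break falls just before the sharp jump in the last two years, mirroring the kind of acceleration the analysis tests for around 2020.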