Computer-assisted scientific discovery promises to revolutionise how humans discover new materials, find novel drugs or identify new uses for existing ones, and improve clinical trial design and efficiency. The potential of technology to accelerate scientific discovery when the space of possible candidate solutions is too large for human evaluation alone is unlike anything we’ve seen before. Research into the tools and technologies required to enable accelerated discovery is an emerging and rapidly evolving field. In drug discovery, for example, learning good protein and molecule representations is a fundamental step in applying predictive and generative models to propose new candidate compounds. But the potential of these methods to accelerate discovery has not yet been fully realised. Combining existing background knowledge from sources such as the scientific literature, together with human expertise, into computable knowledge representations may enhance predictive and generative models for candidate solution generation. In this talk I will explore a basic question – can we revisit existing rich knowledge to uncover what hasn’t yet been discovered? I will share a perspective on research and core technologies that may help us accelerate scientific discovery by leveraging multisource and multimodal knowledge, from extraction to consumption. This perspective is grounded in practical experience with real-world shared-knowledge challenges across diverse projects. I will draw on lessons learned from supporting governments and healthcare agencies to safeguard the integrity of providers’ claims by creating human-readable and machine-consumable knowledge from policy text, and from supporting the integration of scientific literature with person-centred (social, health or behavioural) data to improve the identification of 'at-risk' cohorts.