Medical ontologies are widely used to describe and organize medical terminologies and to support many healthcare applications. These ontologies are either manually curated (e.g., UMLS, SNOMED CT, and MeSH) or derived from relational databases, and may therefore use different terminologies and structures. Such differences compromise interoperability between data described by different ontologies. Ontology matching is the process of finding semantic correspondences between concepts in two ontologies. Existing solutions to ontology matching have focused on engineering features from terminological, structural, and semantic model information extracted from the ontologies. However, feature engineering is labor-intensive, and the accuracy of ontology matching varies greatly across ontology pairs. In this paper, we propose OntoGNN, a novel framework that learns and unifies multiple facets of concepts for medical ontology matching. We develop three innovative techniques in OntoGNN: (1) a hyperbolic graph convolution layer that encodes hierarchical concepts in hyperbolic space, (2) a heterogeneous graph layer that encodes both the local and global context of a concept, and (3) a heuristic-based method that enriches a derived ontology with additional semantic information. Experiments on three real-world medical ontologies show that our approach consistently achieves state-of-the-art results. In addition, we derive an ontology from the MIMIC-III dataset and evaluate OntoGNN on matching it to SNOMED CT, where it significantly outperforms state-of-the-art methods.
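The hyperbolic graph convolution layer named in contribution (1) can be illustrated with a minimal sketch. This is not OntoGNN's actual implementation; it assumes the common recipe of working on the Poincaré ball with curvature -c, performing the linear transform and neighborhood aggregation in the tangent space at the origin (via the logarithmic map), and projecting the result back with the exponential map. All function names here (`expmap0`, `logmap0`, `hyperbolic_gcn_layer`) are illustrative, not taken from the paper.

```python
import numpy as np

def expmap0(v, c=1.0):
    # Exponential map at the origin of the Poincare ball (curvature -c):
    # sends a tangent (Euclidean) vector into hyperbolic space.
    sqrt_c = np.sqrt(c)
    norm = np.clip(np.linalg.norm(v, axis=-1, keepdims=True), 1e-9, None)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(x, c=1.0):
    # Logarithmic map at the origin: inverse of expmap0, sends a point
    # on the ball back to the tangent space where Euclidean ops are valid.
    sqrt_c = np.sqrt(c)
    norm = np.clip(np.linalg.norm(x, axis=-1, keepdims=True), 1e-9, None)
    scaled = np.clip(sqrt_c * norm, None, 1.0 - 1e-7)  # stay inside the ball
    return np.arctanh(scaled) * x / (sqrt_c * norm)

def hyperbolic_gcn_layer(X, A, W, c=1.0):
    """One hyperbolic graph-convolution step (hypothetical sketch):
    1. map hyperbolic node features X to the tangent space at the origin,
    2. apply the linear transform W and row-normalized (mean) neighborhood
       aggregation with adjacency A there,
    3. apply a nonlinearity and map the result back onto the ball."""
    H = logmap0(X, c) @ W                                   # transform in tangent space
    A_hat = A / np.clip(A.sum(axis=1, keepdims=True), 1e-9, None)
    H = A_hat @ H                                           # mean aggregation over neighbors
    return expmap0(np.tanh(H), c)                           # back to hyperbolic space
```

Because distances near the boundary of the ball grow exponentially, embeddings of this form can represent tree-like is-a hierarchies (such as those in SNOMED CT) with much lower distortion than Euclidean embeddings of the same dimension.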