Washington, D.C., USA


The ACM SIGKDD Conference on Knowledge Discovery and Data Mining is one of the leading annual conferences on data science, data mining, knowledge discovery, large-scale data analytics, and big data. The event convenes researchers and practitioners from across disciplines to share best practices and discuss their work.

IBM researchers are presenting hands-on tutorials on how to use our Toolkit for Time Series Anomaly Detection and on Gradual AutoML using Lale. Our experts are also co-organizing seven workshops at KDD and have co-authored eight main conference papers.

Why attend

Learn about the latest trends in data mining and knowledge discovery. Meet IBM experts working in this space.


  • We propose an extension to the transformer neural network architecture for general-purpose graph learning that adds a dedicated pathway for pairwise structural information, called edge channels. The resulting framework, which we call the Edge-augmented Graph Transformer (EGT), can directly accept, process, and output structural information of arbitrary form, which is important for effective learning on graph-structured data. Our model uses global self-attention exclusively as its aggregation mechanism, rather than static localized convolutional aggregation, allowing unconstrained long-range dynamic interactions between nodes. Moreover, the edge channels allow the structural information to evolve from layer to layer, and prediction tasks on edges/links can be performed directly from the output embeddings of these channels. We verify the performance of EGT in a wide range of graph-learning experiments on benchmark datasets, where it outperforms convolutional and message-passing graph neural networks. EGT sets a new state of the art for the quantum-chemical regression task on the OGB-LSC PCQM4Mv2 dataset, which contains 3.8 million molecular graphs. Our findings indicate that global self-attention-based aggregation can serve as a flexible, adaptive, and effective replacement for graph convolution in general-purpose graph learning, and therefore that convolutional local-neighborhood aggregation is not an essential inductive bias.

    Md Shamim Hussain (RPI); Mohammed Zaki (RPI); Dharmashankar Subramanian (IBM Research)
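    The core idea of the abstract above — edge channels that bias global self-attention and themselves evolve from layer to layer — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: it assumes single-head attention, a scalar edge channel per node pair, and a simple residual edge update; all function and parameter names (`egt_attention`, `edge_scale`, etc.) are hypothetical.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def egt_attention(H, E, Wq, Wk, Wv, Wo, edge_scale=1.0):
        """One edge-augmented global self-attention step (illustrative).

        H: (n, d) node embeddings; E: (n, n) scalar edge-channel values,
        e.g. derived from adjacency or shortest-path structure.
        """
        Q, K, V = H @ Wq, H @ Wk, H @ Wv
        d = Q.shape[-1]
        # Edge channels bias the attention logits, injecting pairwise
        # structural information into the otherwise fully global attention.
        logits = (Q @ K.T) / np.sqrt(d) + edge_scale * E
        A = softmax(logits, axis=-1)   # every node attends to every node
        H_new = (A @ V) @ Wo           # node update via global aggregation
        E_new = E + logits             # edge channels evolve layer to layer
        return H_new, E_new
    ```

    Because `E_new` is produced alongside the node update, a stack of such layers can read edge/link predictions directly from the final edge-channel values, as the abstract describes.
    
    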

  • With the advent of big data across multiple high-impact applications, we often face the challenge of complex heterogeneity. Newly collected data usually consist of multiple modalities and are characterized by multiple labels, exhibiting the co-existence of multiple types of heterogeneity. Although state-of-the-art techniques are good at modeling complex heterogeneity given sufficient label information, such label information can be quite expensive to obtain in real applications. Recently, contrastive learning has attracted great attention due to its prominent performance in exploiting rich unlabeled data. However, existing work on contrastive learning cannot address the problem of false negative pairs, i.e., pairs treated as `negative' that in fact share the same label and therefore have similar representations. To overcome this issue, we propose a unified heterogeneous learning framework that combines a weighted unsupervised contrastive loss and a weighted supervised contrastive loss to model multiple types of heterogeneity. We first provide a theoretical analysis showing that the vanilla contrastive loss easily leads to sub-optimal solutions in the presence of false negative pairs, whereas the proposed weighted loss automatically adjusts weights based on the similarity of the learned representations to mitigate this issue. Experimental results on real-world datasets demonstrate the effectiveness and efficiency of the proposed framework in modeling multiple types of heterogeneity.

    Lecheng Zheng (University of Illinois at Urbana-Champaign); Jinjun Xiong (University at Buffalo); Yada Zhu (IBM Research); Jingrui He (University of Illinois at Urbana-Champaign)
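    To make the false-negative problem concrete, here is a minimal sketch of a weighted InfoNCE-style contrastive loss in which negatives whose similarity to the anchor approaches that of the positive — the candidate false negatives — are down-weighted. The sigmoid weighting scheme and all names (`weighted_contrastive_loss`, `tau`) are assumptions for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def weighted_contrastive_loss(z, pos_idx, tau=0.5):
        """Illustrative weighted contrastive loss over n samples.

        z: (n, d) embeddings; pos_idx[i] is the index of sample i's
        positive pair. Each negative's weight shrinks as its similarity
        to the anchor approaches the positive's similarity.
        """
        z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize
        sim = z @ z.T / tau                               # scaled similarities
        n = z.shape[0]
        total = 0.0
        for i in range(n):
            j = pos_idx[i]
            neg = [k for k in range(n) if k != i and k != j]
            # Down-weight high-similarity negatives (likely false negatives).
            w = sigmoid(sim[i, j] - sim[i, neg])
            denom = np.exp(sim[i, j]) + np.sum(w * np.exp(sim[i, neg]))
            total += -np.log(np.exp(sim[i, j]) / denom)
        return total / n
    ```

    With unit weights (`w = 1`) this reduces to the vanilla contrastive loss, which — as the abstract's theoretical analysis argues — is pulled toward sub-optimal solutions when `negative' pairs actually share a label.
    
    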