Aditya Kashyap, Maria Anna Rapsomaniki, et al.
TIBTECH
Transformer-based language models have become the de facto standard in natural language processing, yet they underperform traditional tree-based methods on tabular data. We posit that current models fall short of the full potential of language models for two reasons: (i) the heterogeneity of tabular data; and (ii) the difficulty models face in interpreting numerical values. Based on this hypothesis, we propose the Tabular Domain Transformer (TDTransformer) framework. TDTransformer uses distinct embedding processes for different column types, and alignment layers then transform these type-specific embeddings into a common space. In addition, TDTransformer adopts piecewise linear encoding for numerical values to improve performance. We evaluate the proposed method on 76 real-world tabular classification datasets from the OpenML benchmark. Extensive experiments show that TDTransformer improves over state-of-the-art methods.
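The one concrete mechanism named in the abstract is piecewise linear encoding of numerical columns. The sketch below shows that encoding as it is commonly defined in the tabular deep-learning literature; the paper's exact variant may differ, the function name is ours, and the fixed bin edges are purely illustrative (in practice they would be fit from training-data quantiles).

```python
import numpy as np

def piecewise_linear_encode(x, bin_edges):
    """Encode a scalar numerical feature as a piecewise linear vector.

    For bin edges [b_0, ..., b_T], component t is
    clip((x - b_{t-1}) / (b_t - b_{t-1}), 0, 1): bins entirely below x
    saturate at 1, the bin containing x gets a fractional value, and
    bins above x stay 0. The result is a T-dimensional vector that a
    transformer can embed instead of the raw scalar.
    """
    lo, hi = bin_edges[:-1], bin_edges[1:]
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# Illustrative fixed edges; real edges would come from training quantiles.
edges = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
print(piecewise_linear_encode(3.0, edges))  # [1.  1.  0.5 0. ]
```

The appeal of this encoding is that nearby values get nearby vectors while the network still sees which bin a value falls in, which is plausibly what helps the transformer interpret numerical magnitudes.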
Simone Magnani, Stefano Braghin, et al.
Big Data 2023
Oktie Hassanzadeh, Parul Awasthy, et al.
ISWC 2022
Raphaël Pestourie, Youssef Mroueh, et al.
npj Computational Materials