The growing interest in argument mining and computational argumentation brings with it a plethora of Natural Language Understanding (NLU) tasks and corresponding datasets. However, as with many other NLU tasks, the dominant language is English, with resources in other languages being few and far between. In this work, we explore the potential of transfer learning using the multilingual BERT model to address argument mining tasks in non-English languages, based on English datasets and the use of machine translation. We show that such methods are well suited for classifying the stance of arguments, but less so for assessing their quality, presumably because quality is harder to preserve under translation. In addition, focusing on the translate-train approach, we show how the choice of languages to translate into, and the relations among them, affect the accuracy of the resulting model. Finally, we provide a human-generated dataset in multiple languages with more than 10K collected arguments, as well as machine translations of the English datasets, to facilitate the evaluation of transfer learning methods on argument mining tasks.