Site Reliability Engineers (SREs) play a key role in identifying the cause of an issue and performing remediation steps to resolve it. After an issue is reported, SREs come together in a virtual room (collaboration platform) to triage it. In doing so, they leave behind a wealth of information, in the form of conversations, which can later be used to triage similar issues. However, using these conversations is challenging because they are noisy and utterance-level labels are scarce. This paper presents a novel approach for extracting issue artefacts from noisy conversations with minimal labelled data. We propose a combination of unsupervised and supervised models, requiring minimal human intervention, that leverages domain knowledge to predict artefacts for a small amount of conversation data, and then uses those predictions to fine-tune an already pre-trained language model for artefact prediction on a large amount of conversation data. Experimental results on our dataset show that the proposed ensemble of the unsupervised and supervised models outperforms either model used individually. We also present a deployment case study of the proposed artefact prediction.
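The pipeline described above can be sketched as follows. This is a minimal, illustrative Python sketch, not the paper's actual implementation: the artefact labels, domain keywords, and the agreement-based pseudo-labelling strategy are all assumptions standing in for the paper's unsupervised component and ensemble; the resulting pseudo-labelled utterances would then be used to fine-tune a pre-trained language model.

```python
# Hypothetical artefact labels an SRE conversation utterance may carry
# (assumed for illustration; the paper's label set may differ).
LABELS = ("symptom", "root_cause", "remediation", "other")

# Unsupervised component: a rule-based labeler driven by domain keywords.
# The keyword lists below are illustrative assumptions, not the paper's rules.
RULES = {
    "symptom": ["error", "latency", "timeout", "alert"],
    "root_cause": ["caused by", "due to", "regression"],
    "remediation": ["restart", "rollback", "patch", "scale"],
}

def rule_label(utterance: str) -> str:
    """Assign the first label whose domain keywords appear in the utterance."""
    text = utterance.lower()
    for label, keywords in RULES.items():
        if any(kw in text for kw in keywords):
            return label
    return "other"

def ensemble_pseudo_labels(utterances, model_predict):
    """Keep an utterance as pseudo-labelled training data only when the
    rule-based labeler and a supervised model agree; agreement filtering
    is one simple way to combine the two components."""
    pseudo = []
    for utt in utterances:
        if rule_label(utt) == model_predict(utt):
            pseudo.append((utt, rule_label(utt)))
    return pseudo
```

In this sketch, `model_predict` stands in for any supervised classifier trained on the small labelled set; only utterances where both components agree survive, giving higher-precision pseudo-labels for the fine-tuning stage.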