The goal of Domain Adaptation (DA) is to leverage labeled examples from a source domain to infer an accurate model for a target domain where labels are unavailable or scarce at best. Recently, there has been a surge in adversarial learning based deep-net approaches to the DA problem, a prominent example being the DANN approach. These methods require a large number of labeled source examples to infer a good model for the target domain, and their performance degrades sharply as the number of labels shrinks. In this paper, we study the behavior of such approaches (especially DANN) under scarce-label scenarios. Further, we propose an architecture, namely TRAVERS, that amalgamates TRAnsductive learning principles with adVERSarial learning to cushion the performance of these approaches under label scarcity. Experimental results (on both text and images) show a significant boost in the performance of TRAVERS over approaches such as DANN under scarce-label scenarios.