Information is at the center of decision making in many systems and use cases. In cooperative or hostile environments, agents communicate their subjective opinions about various phenomena. However, the sources of these opinions are not always competent and honest; they are often erroneous or even malicious. Furthermore, malicious sources may adopt certain behaviors to mislead the decision maker in a specific way. Fortunately, the reports of such misleading sources remain correlated with the ground truth. Using statistical methods, one can learn how a source is likely to distort the ground truth, together with the associated distortion models, so that reports from these sources can still be fused to improve estimation of the ground truth. In this work, we propose to learn a number of statistically meaningful opinion transformations that represent various behaviors of information sources. We then exploit these transformations while fusing opinions from unreliable sources. Using real data from the Web, and through extensive comparison with recent trust-based approaches in simulations, we show that our approach can determine a set of transformations that leads to more accurate estimation of the truth.