Invariant learning aims to train models that are robust to nuisance confounding present in the data. This is typically achieved by minimizing some measure of dependence between learned representations or predictions and the confounding factors. However, commonly used dependence measures are difficult both to estimate accurately and to minimize reliably. A dependence measure based on the chi-square divergence has recently been shown to be effective for enforcing fairness by learning invariant representations. We show that, with an appropriate parameterization, this choice both improves the quality of dependence estimation and simplifies its minimization. Empirically, we find that our proposal is effective for fair predictor learning and domain generalization.
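
To make the underlying quantity concrete, the following is a minimal sketch (not the paper's estimator) of the empirical chi-square dependence measure between a discrete prediction `z` and a discrete confounder `c`: the chi-square divergence between their joint distribution and the product of their marginals, which is zero exactly when the two are empirically independent. The function name and interface are illustrative assumptions, not from the paper.

```python
import numpy as np

def chi2_dependence(z, c):
    """Empirical chi-square divergence between the joint distribution of
    (z, c) and the product of its marginals. Zero iff z and c are
    empirically independent; larger values indicate stronger dependence."""
    z = np.asarray(z)
    c = np.asarray(c)
    # Map each variable's values to contiguous indices.
    _, zi = np.unique(z, return_inverse=True)
    _, ci = np.unique(c, return_inverse=True)
    # Build the normalized contingency table (empirical joint distribution).
    joint = np.zeros((zi.max() + 1, ci.max() + 1))
    np.add.at(joint, (zi, ci), 1.0)
    joint /= joint.sum()
    # Product of marginals.
    prod = joint.sum(axis=1, keepdims=True) * joint.sum(axis=0, keepdims=True)
    # Chi-square divergence: sum of (p_joint - p_prod)^2 / p_prod.
    return ((joint - prod) ** 2 / prod).sum()
```

For example, `chi2_dependence([0, 0, 1, 1], [0, 1, 0, 1])` is 0 (independent), while `chi2_dependence([0, 0, 1, 1], [0, 0, 1, 1])` is 1 (perfect dependence of two balanced binary variables). In an invariant-learning setup, a differentiable analogue of this quantity would be minimized as a penalty alongside the task loss.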