In most error correction coding (ECC) frameworks, the standard error metric is the bit error rate (BER), which counts the number of bit errors. Under this metric, the positions of erroneous bits are irrelevant to decoding, and under many noise models, irrelevant to the BER as well. In many applications this is unsatisfactory, since bits are typically not equal and differ in significance. We consider the problem of bit error correction and mitigation where bits in different positions have different importance. For error correction, we view ECC from a Bayesian perspective and introduce Bayes estimators with general loss functions that take bit significance into account. We propose ECC schemes that optimize this error metric. Because the problem is highly nonlinear, traditional ECC construction techniques are not applicable, and exhaustive search is cost prohibitive; we therefore use iterative improvement search techniques to find good codebooks. We optimize both general codebooks and linear codes. We provide numerical experiments showing that the resulting codes can be superior to classical linear block codes such as Hamming codes and to decoding methods such as minimum distance decoding. For error mitigation, we study the case where ECC is not possible or not desirable, but significance-aware encoding of information is still beneficial in reducing the average error. We propose a novel number representation format suited to emerging storage media, where the noise magnitude is unknown and possibly large, and show that it has lower mean error than the traditional number format.
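
To make the significance-weighted error metric concrete, the following sketch contrasts minimum distance decoding with Bayes (minimum-risk) decoding under a weighted bit loss. It is illustrative only: the function names are hypothetical, and it assumes a binary symmetric channel with crossover probability p, a uniform prior over codewords, and estimates restricted to the codebook, none of which is specified above.

```python
def weighted_loss(u, v, weights):
    """Significance-weighted loss: sum the weights of positions where u and v differ."""
    return sum(w for a, b, w in zip(u, v, weights) if a != b)

def min_distance_decode(received, codebook):
    """Classical decoding: nearest codeword in (unweighted) Hamming distance."""
    return min(codebook, key=lambda c: sum(a != b for a, b in zip(c, received)))

def bayes_decode(received, codebook, weights, p=0.1):
    """Minimum-risk decoding: the codeword minimizing the posterior expected
    weighted loss, assuming a BSC(p) and a uniform prior over codewords."""
    n = len(received)

    def likelihood(c):
        d = sum(a != b for a, b in zip(c, received))
        return (p ** d) * ((1 - p) ** (n - d))

    total = sum(likelihood(c) for c in codebook)
    posterior = {c: likelihood(c) / total for c in codebook}

    def risk(est):
        return sum(posterior[c] * weighted_loss(est, c, weights) for c in codebook)

    return min(codebook, key=risk)

# Example: MSB-first weights; the received word is equidistant from two codewords.
codebook = [(1, 1, 0, 0), (0, 0, 0, 0), (0, 0, 0, 1)]
weights = (8, 4, 2, 1)  # bit 0 is most significant
received = (0, 1, 0, 0)

min_distance_decode(received, codebook)    # distance tie; returns (1, 1, 0, 0)
bayes_decode(received, codebook, weights)  # returns (0, 0, 0, 0)
```

Here minimum distance decoding faces a tie between (1, 1, 0, 0) and (0, 0, 0, 0), both at Hamming distance 1, whereas the weighted Bayes decoder prefers (0, 0, 0, 0) because residual errors relative to it are concentrated in low-significance bits and therefore cost less in expectation.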