Quantum computers promise a potentially disruptive approach to computation in fields such as physics, chemistry, cryptography, optimisation, and machine learning. However, testing quantum computations for faults is currently impractical because of the noise and errors that affect the output. Executing a circuit with only a few valid output states on a quantum system can generate a significant number of implausible states that have zero probability in an ideal computation. Among other sources of noise, readout errors arise from the difficulty of discriminating whether each qubit was measured as 0 or 1. These errors are compounded by readout drift, which requires regular recalibration of the discrimination process. In this paper, we present a novel technique for post-computation analysis of the output probability distributions that permits better discrimination of kerneled data, delaying the need for recalibration. We achieve this by altering the linear discrimination of the final output states via a dynamic state selection process that combines Gaussian mixture models with a probability threshold. As an initial assessment of the technique, we examine its effect on three- to five-qubit GHZ states. Our results on almost all of nine IBM quantum computers show that the number of implausible states is reduced significantly and that the resulting probability distribution is closer to the expected one.
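To make the idea concrete, the following is a minimal sketch of Gaussian-mixture discrimination with a probability threshold, applied to synthetic kerneled (IQ-plane) readout data. The cluster means, spreads, and the threshold value of 0.9 are illustrative assumptions, not the parameters used in this work; real kerneled data would come from the backend's level-1 measurement output.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic kerneled readout: each shot is a point in the IQ plane.
# Two clusters stand in for the |0> and |1> measurement outcomes.
shots_0 = rng.normal(loc=[-1.0, 0.0], scale=0.35, size=(500, 2))
shots_1 = rng.normal(loc=[+1.0, 0.0], scale=0.35, size=(500, 2))
shots = np.vstack([shots_0, shots_1])

# Fit a two-component Gaussian mixture to the unlabeled IQ points.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(shots)

# Posterior probability of each shot under each component.
post = gmm.predict_proba(shots)

# Threshold the posterior: assign a shot to its most likely component
# only when that probability is high enough; flag the rest as ambiguous.
THRESHOLD = 0.9  # hypothetical value for illustration
labels = post.argmax(axis=1)
confident = post.max(axis=1) >= THRESHOLD

print(f"kept {confident.sum()} of {len(shots)} shots as confident")
```

Unlike a fixed linear separatrix, the mixture components can be refitted on each batch of shots, which is what allows the discrimination to track drifting clusters between calibrations.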