We consider the use of error-control codes and decoding algorithms to perform reliable classification using unreliable and anonymous human crowd workers, adapting coding-theoretic techniques to the specific crowdsourcing application. We develop an ordering principle for the quality of crowds and describe how system performance changes with crowd quality. We demonstrate the effectiveness of the proposed coding scheme using both simulated data and real datasets from Amazon Mechanical Turk, a crowdsourcing microtask platform. Results suggest that good codes may improve the performance of the crowdsourcing task over typical majority-vote approaches. © 2013 IEEE.
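The underlying idea can be sketched as follows: each class is assigned a binary codeword, each worker answers one binary microtask corresponding to one codeword bit, and the noisy answer vector is decoded by minimum Hamming distance. The 4-class, length-8 codebook below is an illustrative assumption for this sketch, not the paper's actual code construction.

```python
# Illustrative sketch: classes mapped to binary codewords; each worker
# answers one binary microtask (one codeword bit), and noisy answers are
# decoded by minimum Hamming distance. The codebook is an assumed example.

def hamming(a, b):
    """Number of positions where two bit-tuples differ."""
    return sum(x != y for x, y in zip(a, b))

# Pairwise Hamming distance 4, so any single worker error is corrected.
CODEBOOK = {
    0: (0, 0, 0, 0, 0, 0, 0, 0),
    1: (1, 1, 1, 1, 0, 0, 0, 0),
    2: (0, 0, 1, 1, 1, 1, 0, 0),
    3: (1, 1, 0, 0, 1, 1, 0, 0),
}

def decode(answers):
    """Return the class whose codeword is nearest to the workers' answers."""
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], answers))

# One flipped bit (one unreliable worker) is still decoded correctly:
noisy = list(CODEBOOK[2])
noisy[5] ^= 1                  # worker 5 answers incorrectly
print(decode(tuple(noisy)))    # → 2
```

In contrast to a majority vote over direct class labels, the redundancy across bits lets the decoder recover the true class even when individual workers err, which is the intuition behind the reported improvement.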