This paper addresses the design of symmetric entropy-constrained multiple description scalar quantizers (EC-MDSQ) with linear joint decoders, i.e., where some of the decoders compute the reconstruction by averaging the reconstructions of the individual descriptions. The use of linear decoders reduces the space complexity at the decoder, since only a subset of the codebooks needs to be stored. The proposed design algorithm locally minimizes a Lagrangian, defined as a weighted sum of the expected distortion and the side quantizers' rates. The algorithm is inspired by the EC-MDSQ design algorithm of Vaishampayan and Domaszewicz, extended from two to $K$ descriptions. Unlike in that work, the optimization of the reconstruction values can no longer be performed separately at the decoder optimization step. Interestingly, we show that this problem is a convex quadratic optimization problem, which can be solved efficiently. Moreover, generalizing the encoder optimization step from two to $K$ descriptions drastically increases the computational cost. We show how to exploit the special form of the cost function conferred by the linear joint decoders to significantly reduce the time complexity of this step. We compare the performance of the proposed design with multiple description lattice vector quantizers (MDLVQ) and with the multiple description scheme based on successive refinement and unequal erasure protection (UEP). Our experiments show that the proposed approach outperforms MDLVQ with dimension-1 quantization, as expected. Moreover, as more codebooks are added, our scheme even surpasses MDLVQ with quantization dimension approaching infinity, at sufficiently high rates. Furthermore, the proposed approach is also superior to UEP with dimension-1 quantization at low rates.
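To make the notion of a linear joint decoder concrete, the following is a minimal illustrative sketch, not the optimized quantizers designed in the paper: each received description carries an index into its own side codebook, and the joint reconstruction is simply the average of the available side reconstructions. The codebook values and the function name here are hypothetical placeholders.

```python
# Illustrative sketch of a linear joint decoder for K descriptions.
# The side codebooks below are hypothetical placeholders, not the
# entropy-constrained codebooks produced by the design algorithm.

def linear_joint_decode(received, codebooks):
    """Average the side reconstructions of the descriptions that arrived.

    received  : dict mapping description id -> received side index
    codebooks : dict mapping description id -> list of reconstruction values
    """
    if not received:
        raise ValueError("no description received")
    vals = [codebooks[k][i] for k, i in received.items()]
    return sum(vals) / len(vals)

# Example with K = 3 hypothetical side codebooks (binary-friendly values):
codebooks = {
    0: [-1.5, -0.5, 0.5, 1.5],
    1: [-1.25, -0.25, 0.75, 1.75],
    2: [-1.75, -0.75, 0.25, 1.25],
}
# Descriptions 0 and 2 arrive with side indices 2 and 2:
print(linear_joint_decode({0: 2, 2: 2}, codebooks))  # -> 0.375
```

Because such a joint decoder stores no dedicated codebook for each subset of received descriptions, only the $K$ side codebooks need to be kept, which is the space saving mentioned above.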