The success of machine learning models over the last few years is largely due to the significant progress of deep neural networks. These powerful and flexible models can even surpass human-level performance in tasks such as image recognition and strategy games. However, experts need to spend considerable time and resources to design the network structure. The demand for new architectures drives interest in automating this design process. Researchers have proposed new algorithms to address the neural architecture search (NAS) problem, including efforts to reduce the high computational cost of such methods. A common approach to improving efficiency is to restrict the search space with expert knowledge, searching for cells rather than entire networks. Motivated by the faster convergence promoted by quantum-inspired evolutionary methods, the Q-NAS algorithm was proposed to address the NAS problem without relying on cell search. In this work, we consolidate Q-NAS by adding a new penalization feature, enhancing its retraining scheme, and investigating more challenging search spaces than before. On CIFAR-10, we reached a test accuracy of 93.85% in 67 GPU days, aided by the addition of an early-stopping mechanism. We also applied Q-NAS to CIFAR-100 without any parameter adjustment, and our best accuracy was 74.23%, which is comparable to ResNet164. The enhancements and results presented in this work show that Q-NAS can automatically generate network architectures that outperform hand-designed models on CIFAR-10 and CIFAR-100. Moreover, compared to other NAS methods, Q-NAS results are promising regarding the balance between performance, runtime efficiency, and automation. We believe that our results enrich the discussion on this balance, considering alternatives to the cell search approach.