In our previous study, we proposed a multi-agent reinforcement learning framework for the feature selection problem. Specifically, we first reformulated feature selection as a reinforcement learning task by regarding each feature as an agent. We then derived a fixed-length state representation of the environment, used as the input to reinforcement learning, in three ways: statistical description, an autoencoder, and a graph convolutional network (GCN). In addition, we studied how coordination among feature agents can be improved through a more effective reward scheme, and we proposed a Gaussian mixture model (GMM)-based generative rectified sampling strategy to accelerate the convergence of multi-agent reinforcement learning. Our method searches the feature-subset space globally and, owing to the nature of reinforcement learning, can be easily adapted to real-time scenarios. In this extended version, we further accelerate the framework from two aspects: from the sampling aspect, we achieve indirect acceleration by proposing a rank-based softmax sampling strategy, and from the exploration aspect, we achieve direct acceleration by proposing an interactive reinforcement learning (IRL)-based exploration strategy. Extensive experimental results demonstrate significant improvements of the proposed method over conventional approaches.
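To make the rank-based softmax sampling idea concrete, the following is a minimal illustrative sketch, not the paper's exact formulation: the scoring function, the rank-to-logit mapping, and the temperature parameter here are assumptions introduced for illustration. The key property is that sampling probabilities depend only on each feature's rank under some importance score, not on the raw score magnitudes.

```python
import numpy as np

def rank_softmax_probs(scores, temperature=1.0):
    """Softmax over ranks: rank 0 (best score) gets the highest probability.

    Using ranks instead of raw scores makes the sampling distribution
    robust to outliers in the score values (an illustrative design
    choice, assumed here rather than taken from the paper).
    """
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(-scores)          # indices sorted best-first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order)) # rank of each item
    logits = -ranks / temperature        # smaller rank -> larger logit
    logits -= logits.max()               # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Example: sample a feature subset of size 2 under rank-based probabilities.
rng = np.random.default_rng(0)
scores = [0.2, 5.0, 0.9, 0.7]            # hypothetical feature scores
p = rank_softmax_probs(scores, temperature=2.0)
subset = rng.choice(len(scores), size=2, replace=False, p=p)
```

A higher temperature flattens the distribution toward uniform exploration, while a lower temperature concentrates sampling on top-ranked features; this trade-off is the usual motivation for softmax-style sampling schedules.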