This study presents a novel solution to the problem of agent coalition formation in heterogeneous, distributed multiagent systems deployed in real-world environments. Specifically, we study dynamic, uncertain environments in which tasks may evolve during execution and agent and resource availability may vary rapidly and unpredictably. We focus on cases in which agents must collaborate for efficient task execution, i.e., in which stable coalition formation is required. In our context, dynamics and uncertainty prohibit computing coalition stability ahead of task execution. We nevertheless seek stable, efficient, and decentralized coalition formation. Combining methods from game theory, Markov decision processes, and probability theory, we introduce an autostabilizing, core-stable coalition formation mechanism. The mechanism arrives at stability, maximizes social welfare, and converges gradually to the required coalitions.
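To illustrate the core-stability notion the mechanism targets, the following is a minimal sketch, not the paper's mechanism itself: it checks whether a payoff allocation in a small characteristic-function game lies in the core, i.e., whether any coalition could secure more value on its own than its members jointly receive. The agent names and characteristic-function values are hypothetical.

```python
from itertools import chain, combinations


def coalitions(agents):
    """Enumerate all non-empty subsets of the agent set."""
    return chain.from_iterable(
        combinations(agents, r) for r in range(1, len(agents) + 1)
    )


def in_core(agents, v, payoff):
    """An allocation is core-stable if no coalition S can secure
    more value v(S) than its members jointly receive."""
    return all(
        sum(payoff[i] for i in S) >= v[frozenset(S)]
        for S in coalitions(agents)
    )


# Toy 3-agent characteristic function (hypothetical values).
agents = ("a", "b", "c")
v = {
    frozenset("a"): 1, frozenset("b"): 1, frozenset("c"): 1,
    frozenset("ab"): 3, frozenset("ac"): 3, frozenset("bc"): 3,
    frozenset("abc"): 6,
}

print(in_core(agents, v, {"a": 2, "b": 2, "c": 2}))  # equal split of v(N)=6 → True
print(in_core(agents, v, {"a": 4, "b": 1, "c": 1}))  # coalition {b,c} deviates → False
```

Note that this exhaustive check is exponential in the number of agents and assumes the characteristic function is fully known in advance, which is exactly what the dynamic, uncertain setting above rules out.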