We consider the problem of risk-aware Markov Decision Processes (MDPs) in the context of Safe AI. We introduce a novel theoretical framework, Extended Markov Ratio Decision Processes (EMRDP), that incorporates risk into MDPs and embeds environment learning. We propose an algorithm that finds the optimal policy for an EMRDP with theoretical guarantees. Under a certain monotonicity assumption, this algorithm runs in strongly polynomial time in both the discounted and expected average reward models. We validate the algorithm empirically on the Grid World benchmark, evaluating its solution quality, required number of steps, and numerical stability. We find its solution quality to be stable under data noising, while its required number of steps grows with the noise level. We observe that it is numerically stable compared to global methods.