A new representation technique for planning decisions under uncertainty is presented. The technique, called Markov decision networks (MDNs), solves planning tasks that are difficult or impossible to solve with other known representation techniques, such as decision trees, influence diagrams, and POMDPs. We show that linear MDNs are POMDPs; hence, the incremental-pruning algorithm for POMDPs is applicable to this subclass of MDNs. Moreover, the time complexity of the algorithm is lower for MDNs than for general POMDPs, owing to the division of the state space into state groups. An example demonstrates the practical benefit of these findings: an optimal strategy can be computed in a few seconds.