Prior work on generating explanations in a planning context has focused on providing the rationale behind an AI agent's decision-making. While these methods can produce the correct explanations, they fail to account for the cognitive requirements of understanding an explanation from the explainee's (i.e., the human's) perspective. In this work, we address this issue by considering the order in which the information in an explanation is communicated, or the progressiveness of making explanations. Progression is the notion of building complex concepts on simpler ones, which is known to benefit learning. We investigate a similar effect when an explanation is composed of multiple parts that are communicated sequentially. The challenge lies in determining the order of the parts that best assists understanding. Given the sequential nature of the problem, we formulate it as a goal-based Markov decision process (MDP) whose reward function is learned from training data via inverse reinforcement learning. We evaluated our approach in an escape-room domain to demonstrate its effectiveness. Analysis of the results revealed that the desired order arises strongly from both domain-dependent and domain-independent features. This confirmed our expectation that the process of understanding an explanation for planning tasks is progressive and context dependent. We also showed that explanations generated using the learned rewards achieved better task performance while simultaneously reducing cognitive load. These results shed light on the design of explainable robots across various domains.