Intelligent robots are redefining autonomous tasks but are still far from fully capable of assisting humans in day-to-day activities. An important requirement of collaboration is a clear understanding of each other's expectations and capabilities; the lack of such understanding may lead to serious issues such as poor coordination between teammates, ineffective team performance, and ultimately mission failure. Hence, it is important for robots to behave explicably, i.e., to make themselves understandable to the human. One challenge here is that the human's expectations are often hidden and change dynamically as the human interacts with the robot, whereas existing approaches to plan explicability often assume that these expectations are known and static. In this paper, we propose active explicable planning to address this issue. We apply a Bayesian approach to model and predict dynamic human beliefs, which makes the robot more anticipatory and hence allows it to generate more efficient plans without sacrificing explicability. We hypothesize that active explicable plans can be both more efficient and more explicable than plans generated by existing methods. Preliminary results from an MTurk study suggest that our approach effectively captures the dynamic belief of the human, which can be used to generate efficient and explicable behavior that benefits from dynamically changing expectations.
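To make the Bayesian idea concrete, the following is a minimal sketch of the kind of belief update described above, where the robot maintains a distribution over what the human expects and revises it after each observed interaction. The hypothesis names and likelihood values are illustrative assumptions, not the paper's actual model.

```python
# Sketch of a Bayesian update over the human's expectation of the robot's
# goal: posterior P(h | obs) is proportional to P(obs | h) * P(h).
# All hypotheses and likelihood numbers below are hypothetical examples.

def update_belief(prior, likelihoods):
    """Return the normalized posterior over hypotheses.

    prior       -- dict mapping hypothesis -> prior probability
    likelihoods -- dict mapping hypothesis -> P(observation | hypothesis)
    """
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

# Example: the human initially considers goals A and B equally likely;
# the observed robot action is twice as likely under goal A.
belief = {"A": 0.5, "B": 0.5}
belief = update_belief(belief, {"A": 0.8, "B": 0.4})
print(belief)  # belief shifts toward goal A
```

Repeating this update after every observed robot action yields the dynamically changing human belief that the planner can anticipate when trading off plan efficiency against explicability.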