One important aspect of directing cognitive robots or agents is to formally specify what is expected of them; this is often referred to as goal specification. For agents whose actions have deterministic consequences, various goal specification languages have been proposed. The situation is different, and less studied, when actions may have non-deterministic consequences. For example, even the simple goal of achieving p has many nuances, such as making sure that p is achieved, trying one's best to achieve p, preferring guaranteed achievement of p over merely possible achievement, and so on. Similarly, there are many nuances in expressing the goal of trying to achieve p and, if that fails, achieving q. We develop an extension of the branching-time temporal logic CTL, which we call π-CTL, show how the above-mentioned goals can be expressed in it, and explain why they cannot be expressed in CTL. We compare our approach to an alternative approach proposed in the literature.
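To make the guaranteed-versus-possible distinction concrete, one way such an extension could express it is with path quantifiers that range only over trajectories consistent with the agent's policy (written here as A_π and E_π); this notation is an illustrative assumption, not taken from the abstract itself:

```latex
% Illustrative sketch (assumed notation): A_pi / E_pi quantify over
% the trajectories the agent's policy can actually generate, in
% contrast to CTL's A / E, which range over all trajectories.

% "Make sure that p": every policy trajectory eventually reaches p.
\mathsf{A}_{\pi}\,\Diamond\, p

% "Possibly achieve p": some policy trajectory eventually reaches p.
\mathsf{E}_{\pi}\,\Diamond\, p
```

Plain CTL cannot draw this distinction because its quantifiers A and E range over all executions of the system rather than over the trajectories induced by a particular policy, which is the motivation for the extension described above.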