Bayesian optimization (BO) has been widely applied across modern science and engineering, including machine learning, neural networks, robotics, aerospace engineering, and experimental design. BO has emerged as the modus operandi for global optimization of an arbitrary expensive-to-evaluate black-box function f. Although BO has been very successful in low dimensions, scaling it to high-dimensional spaces remains challenging because its statistical and computational complexity grows exponentially with the number of dimensions. In this era of high-dimensional data, where inputs can have millions of features, scaling BO to higher dimensions is one of the important goals of the field. A large body of recent work addresses this problem, and many of these methods exploit some underlying structure of the objective function. In this paper, we review recent efforts in this area. In particular, we focus on methods that exploit different underlying structures of the objective function to scale BO to high dimensions.
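To make the setting concrete, the following is a minimal sketch of the standard BO loop: a Gaussian-process surrogate with an RBF kernel and the expected-improvement acquisition function, minimizing a toy one-dimensional function over a candidate grid. This is an illustrative sketch, not a method from the surveyed literature; the function names, the fixed kernel length-scale, and the grid-based acquisition maximization are all simplifying assumptions.

```python
import numpy as np
from math import erf

def f(x):
    # Toy stand-in for an expensive black-box objective (to be minimized).
    return np.sin(3 * x) + x**2 - 0.7 * x

def rbf_kernel(a, b, length=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and std at candidate points Xs, given data (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # Closed-form EI for minimization: (best - mu) * Phi(z) + sigma * phi(z).
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + np.array([erf(zi / np.sqrt(2)) for zi in z]))
    phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return (best - mu) * Phi + sigma * phi

rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, 3)          # small initial design
y = f(X)
grid = np.linspace(-1, 2, 200)     # candidate points for the acquisition
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)       # evaluate f only at the chosen point
    y = np.append(y, f(x_next))
print(X[np.argmin(y)], y.min())
```

The loop evaluates f only 13 times in total, which is the point of BO: the surrogate and acquisition decide where each expensive evaluation is spent. The exponential scaling discussed above shows up here in the candidate grid, whose size grows exponentially with dimension, motivating the structure-exploiting methods this survey covers.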