Probabilistic methods have been applied to image-based rendering, casting virtual view synthesis as a Bayesian inference problem. For the inference to yield reasonable results, the input views must be consistent, which in turn requires the cameras to be placed very close together. Most approaches to relaxing this constraint focus on the prior model. In this paper, we present a method that treats the virtual view as the outcome of a spatial motion from one real view. A sequence of images is generated heuristically, with steerable filters used to preserve textures. The interim results are then refined with a texture-based Markov random field prior model. Experiments show that the synthesized view achieves satisfactory image quality from only a few input images captured by wide-baseline cameras.