In this paper, we propose a new multitask learning (MTL) model which can stably learn a series of multi-class pattern recognition problems. Knowledge transfer in the proposed MTL model is implemented by the following mechanisms: (1) transfer by sharing the internal representation of RBFs and (2) transfer of information on class subregions from related tasks. The proposed model can detect task changes on its own based on the output errors, even though no task information is given by the environment. It also learns training samples of different tasks that are given one after another. In the experiments, the recognition performance is evaluated on eight multitask pattern recognition (MTPR) problems defined from four UCI data sets. The experimental results demonstrate that the proposed MTL model outperforms a single-task learning model in terms of final classification accuracy. Furthermore, we show that the transfer of class subregions contributes to enhancing the generalization performance on a new task with fewer training samples.
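As a minimal illustration of the task-change detection mentioned above, one simple criterion is to flag a change when the recent mean output error jumps above the long-run baseline by some margin. The function name, window size, and threshold below are assumptions for illustration only; the paper's actual detection rule may differ.

```python
import numpy as np

def detect_task_change(errors, window=20, threshold=0.5):
    """Hypothetical task-change test: compare the mean output error
    over the most recent `window` samples against the mean over all
    earlier samples; a jump larger than `threshold` signals a change."""
    errors = np.asarray(errors, dtype=float)
    if len(errors) < 2 * window:
        return False  # not enough history to form both averages
    recent = errors[-window:].mean()     # error on the newest samples
    baseline = errors[:-window].mean()   # long-run error before them
    return bool(recent > baseline + threshold)

# Simulated error stream: low errors on task A, then a jump when task B begins.
stream = [0.1] * 40 + [0.9] * 20
print(detect_task_change(stream))  # True: recent window covers task B
```

The key design choice in any such scheme is that the learner relies only on its own output errors, not on task labels supplied by the environment.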