Alzheimer’s disease (AD) is the most common type of dementia. Identifying biomarkers that can track AD at its early stages is crucial for successful therapy. Many researchers have developed models that predict cognitive impairment by exploiting the valuable longitudinal imaging information collected along the progression of the disease. However, previous methods model the problem either in an isolated single-task mode or in a multi-task batch mode, ignoring the fact that longitudinal data always arrive in a continuous time sequence and that, in practice, rich types of longitudinal data keep arriving to which a learned model must be applied. To this end, we continually model AD progression in time sequence via a novel Deep Multi-order Preserving Weight Consolidation (DMoPWC) to simultaneously (1) discover the inter- and inner-relations among different cognitive measures at different time points and utilize such relations to enhance the learning of associations between imaging features and clinical scores; and (2) continually learn from new longitudinal patients’ images while overcoming forgetting of previously learned knowledge, without access to the old data. Moreover, inspired by recent breakthroughs in Recurrent Neural Networks, we incorporate time-order knowledge to further reinforce the statistical power of DMoPWC and to ensure that features at a particular time point are temporally ahead of the features at subsequent time points. Empirical studies on a longitudinal brain image dataset demonstrate that DMoPWC achieves superior performance over other AD prognosis algorithms.
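The abstract does not give DMoPWC’s objective, but the “weight consolidation” idea it builds on is commonly realized as an Elastic-Weight-Consolidation-style quadratic penalty: when learning from newly arriving longitudinal data, parameters important to earlier time points are anchored to their previous values, so old knowledge is retained without revisiting the old data. The sketch below is purely illustrative, not the paper’s method; the function names and the diagonal-Fisher importance estimate are assumptions.

```python
import numpy as np

def fisher_diagonal(per_sample_grads):
    """Illustrative importance estimate: diagonal Fisher information
    approximated as the mean of squared per-sample gradients
    (one row per sample, one column per parameter)."""
    g = np.asarray(per_sample_grads, dtype=float)
    return (g ** 2).mean(axis=0)

def consolidation_penalty(theta, theta_old, fisher, lam=1.0):
    """EWC-style quadratic penalty: anchors the current weights `theta`
    to the weights `theta_old` learned on earlier time points, with each
    parameter weighted by its estimated importance `fisher`."""
    theta = np.asarray(theta, dtype=float)
    theta_old = np.asarray(theta_old, dtype=float)
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Toy usage: two parameters; the first is far more "important",
# so moving it away from its old value is penalized more heavily.
per_sample_grads = [[2.0, 0.1], [1.0, 0.2], [3.0, 0.1]]
F = fisher_diagonal(per_sample_grads)
penalty = consolidation_penalty([1.0, 1.0], [0.0, 0.0], F, lam=1.0)
```

In training on a new batch of patients, such a penalty would simply be added to the task loss; DMoPWC additionally preserves multi-order relations among cognitive measures and a time-order constraint, which this minimal sketch does not cover.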