Recent work on deep adaptation networks has yielded progressive improvements on unsupervised domain-adaptive classification tasks by reducing the distribution discrepancy between the source and target domains. In parallel, the unification of dominant semi-supervised learning techniques has demonstrated the potential of unlabeled data for training a classification model when labeled data are scarce. In this paper, we propose Domain Adaptive Fusion (DAF), a novel domain adaptation algorithm that encourages a domain-invariant linear relationship between the pixel space of different domains and the prediction space while training under a domain-adversarial signal. This combination of key components from unsupervised domain adaptation and semi-supervised learning enables DAF to effectively bridge the gap between the source and target domains. Experiments on computer vision benchmark datasets for domain adaptation confirm the efficacy of our hybrid approach, which outperforms all baseline architectures on most transfer tasks.