Human-centered Internet-of-Things (IoT) applications use computational algorithms, such as machine learning and signal processing techniques, to infer knowledge about important events such as physical activities and medical complications. The inference is typically based on data collected with wearable sensors or sensors embedded in the environment. A major obstacle to large-scale deployment of these systems is that the computational algorithms cannot be shared between users or reused in contexts different from the setting in which the training data were collected. For example, an activity recognition algorithm trained on a wrist-band sensor cannot be used with a smartphone carried on the waist. We propose an approach for automatic detection of physical sensor contexts (e.g., on-body sensor location) without the need to collect new labeled training data. Our techniques enable system designers and end users to share and reuse computational algorithms trained under different contexts and data collection settings. We develop a framework that autonomously identifies the sensor context and propose a gating function that automatically activates the most accurate computational algorithm among a set of shared expert models. Our analysis of real data collected from human subjects performing 12 physical activities demonstrates that the accuracy of our multi-view learning approach is only 7.9% below the experimental upper bound for activity recognition with a dynamic sensor that constantly migrates from one on-body location to another. We also compare our approach with several mixture-of-experts models and transfer learning techniques and show that it outperforms algorithms in both categories.
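To make the gating idea concrete, the following is a minimal sketch (not the paper's implementation, which infers sensor context via multi-view learning without new labeled data). It assumes a set of pretrained per-context expert classifiers and uses a simple confidence-based gate: each sample is routed to the expert that is most confident about it. All names, the synthetic data, and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical setup: 3 on-body contexts (e.g., wrist, waist, ankle),
# each with its own pretrained "expert" activity classifier.
n_contexts, n_features, n_activities = 3, 8, 4

def make_data(context, n=200):
    # Synthetic features whose distribution shifts with sensor context.
    X = rng.normal(loc=context, scale=1.0, size=(n, n_features))
    # Weakly learnable synthetic activity labels tied to the features.
    y = (X.sum(axis=1) // 2).astype(int) % n_activities
    return X, y

# Train one expert per context on that context's labeled data.
experts = []
for c in range(n_contexts):
    X, y = make_data(c)
    experts.append(LogisticRegression(max_iter=1000).fit(X, y))

def predict_activity(x):
    """Confidence-based gating: activate the expert most confident on x.

    This stands in for the paper's gating function; it requires no
    context-labeled data at inference time.
    """
    x = x.reshape(1, -1)
    confidences = [e.predict_proba(x).max() for e in experts]
    best = int(np.argmax(confidences))
    return best, int(experts[best].predict(x)[0])

# Usage: a sample from an unknown (here, waist-like) context is routed
# to the expert whose model best matches its feature distribution.
x_new, _ = make_data(context=1, n=1)
ctx, act = predict_activity(x_new[0])
print(f"selected expert={ctx}, predicted activity={act}")
```

A confidence-based gate is only one plausible stand-in; the framework described above instead identifies the sensor context itself, which avoids relying on potentially miscalibrated expert confidences.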