Commercially available wearable brain sensors, together with devices that convert smartphones into virtual reality systems, open up the potential for real-time collaborative brain-mobile interactive applications. Such applications may derive psychological contexts from electroencephalogram (EEG) signals collected wirelessly and provide individualized sensory feedback through devices such as Google Cardboard. A user's psychological context is affected not only by her own behavior but also by her interaction with the environment and possibly with other individuals. Hence, deriving psychological context information requires sensing not only an individual's brain but also data from her neighbors. Further, the data must be processed by computationally intensive machine learning algorithms, which may not execute within the desired latency on resource-limited mobile devices. In such a scenario, real-time computation of psychological contexts and administration of sensory feedback may be infeasible. In this work, we consider offloading psychological context estimation and sensory feedback computation to volunteer mobile devices, and we study the feasibility of large-scale real-time ad hoc brain-mobile interface applications. We present the BraiNet architecture, which can be used to write custom applications that perform computation on brain data, obtain group-level aggregate inferences, and provide feedback. Heavy computation related to brain signal processing can be offloaded to networked mobile devices for ad hoc real-time execution without the need for a dedicated server. We demonstrate the use of BraiNet by developing "Neuro Movie" (nMovie), an application that modulates movie frames based on individuals' subconscious preferences.