Decoding human intent using a wearable system and multi-modal sensor data

Cemil S. Geyik, Arindam Dutta, Umit Ogras, Daniel Bliss

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Despite the phenomenal advances in the computational power of electronic systems, human-machine interaction has remained largely limited to simple control interfaces, such as keyboards and mice, which engage only the physical senses. Consequently, these systems either depend critically on close human guidance or operate almost independently. A richer experience can be achieved if cognitive inputs are used in addition to the physical senses. Towards this end, this paper introduces a simple wearable system that consists of a motion processing unit and a brain-machine interface. We show that our system can successfully employ cognitive indicators to predict human activity.
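As an illustration only (the paper's own pipeline is not detailed in this record), multi-modal intent decoding of this kind is commonly framed as feature-level fusion: per-modality features are extracted from windows of inertial (motion processing unit) and EEG (brain-machine interface) data, concatenated, and fed to a classifier. The Python sketch below shows that general pattern; the feature choices, channel counts, sampling rates, and class labels are all assumptions for demonstration, not values from the paper.

# Illustrative sketch of feature-level fusion for activity prediction.
# All shapes, bands, and labels here are assumptions, not the authors' method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def imu_features(window):
    """Simple time-domain statistics per inertial channel."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def eeg_features(window, fs=256):
    """Band power in canonical EEG bands via a periodogram (assumed bands)."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window, axis=0)) ** 2
    bands = [(4, 8), (8, 13), (13, 30)]  # theta, alpha, beta
    return np.concatenate([
        psd[(freqs >= lo) & (freqs < hi)].mean(axis=0) for lo, hi in bands
    ])

def fuse(imu_window, eeg_window):
    """Concatenate per-modality features into one vector (feature-level fusion)."""
    return np.concatenate([imu_features(imu_window), eeg_features(eeg_window)])

# Synthetic stand-in data: 200 windows, 6 inertial channels, 8 EEG channels.
rng = np.random.default_rng(0)
X = np.stack([
    fuse(rng.standard_normal((128, 6)), rng.standard_normal((256, 8)))
    for _ in range(200)
])
y = rng.integers(0, 3, size=200)  # three hypothetical activity classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

With real recordings, the synthetic arrays would be replaced by time-aligned sensor windows, and the classifier would learn which inertial and cognitive (EEG) features are jointly predictive of the activity.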

Original language: English (US)
Title of host publication: Conference Record of the 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016
Editors: Michael B. Matthews
Publisher: IEEE Computer Society
Pages: 846-850
Number of pages: 5
ISBN (Electronic): 9781538639542
State: Published - Mar 1 2017
Event: 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016 - Pacific Grove, United States
Duration: Nov 6 2016 - Nov 9 2016

Publication series

Name: Conference Record - Asilomar Conference on Signals, Systems and Computers
ISSN (Print): 1058-6393

Other

Other: 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016
Country/Territory: United States
City: Pacific Grove
Period: 11/6/16 - 11/9/16

ASJC Scopus subject areas

  • Signal Processing
  • Computer Networks and Communications
