Human-machine communication for assistive IoT technologies

Alexandra Porter, Md Muztoba, Umit Ogras

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

Despite the phenomenal advances in the computational power and functionality of electronic systems, human-machine interaction has largely been limited to simple control panels, keyboards, mice, and displays. Consequently, these systems either rely critically on close human guidance or operate almost independently of the user. An exemplar technology integrated tightly into our lives is the smartphone. However, the term 'smart' is a misnomer, since it has fundamentally no intelligence to understand its user. Users still have to type, touch, or speak (to some extent) to express their intentions in a form accessible to the phone. Hence, intelligent decision making is still almost entirely a human task. A life-changing experience can be achieved by transforming machines from passive tools into agents capable of understanding human physiology and what their user wants [1]. This can advance human capabilities in unimagined ways by building a symbiotic relationship to solve real-world problems cooperatively. One of the high-impact application areas of this approach is assistive internet of things (IoT) technologies for physically challenged individuals. The Annual World Report on Disability reveals that 15% of the world population lives with disability, while 110 to 190 million of these people have difficulty in functioning [1]. Quality of life for this population can improve significantly if we can provide accessibility to smart devices, which provide sensory inputs and assist with everyday tasks. This work demonstrates that smart IoT devices open up the possibility of alleviating the burden on the user by equipping everyday objects, such as a wheelchair, with decision-making capabilities. Moving part of the intelligent decision making to smart IoT objects requires a robust mechanism for human-machine communication (HMC).
To address this challenge, we present examples of multimodal HMC mechanisms, where the modalities are electroencephalogram (EEG), speech commands, and motion sensing. We also introduce an IoT co-simulation framework developed using a network simulator (OMNeT++) and a robot simulation platform, the Virtual Robot Experimentation Platform (V-REP). We show how this framework is used to evaluate the effectiveness of different HMC strategies using automated indoor navigation as a driver application.
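The abstract does not give implementation details for how the three modalities are combined. As an illustrative sketch only, a multimodal HMC arbiter could fuse discrete navigation commands inferred from EEG, speech, and motion sensing by confidence-weighted voting; all function names, thresholds, and command labels below are hypothetical and not taken from the paper:

```python
# Hypothetical sketch: fuse discrete navigation commands from three
# modalities (EEG, speech, motion) by confidence-weighted voting.
# Names and thresholds are illustrative, not from the paper.
from collections import defaultdict

def fuse_commands(readings, min_confidence=0.5):
    """readings: list of (modality, command, confidence) tuples.

    Returns the command with the highest total confidence across
    modalities, or None when no modality is confident enough
    (a fail-safe default for a wheelchair: do nothing).
    """
    scores = defaultdict(float)
    for modality, command, confidence in readings:
        if confidence >= min_confidence:
            scores[command] += confidence
    if not scores:
        return None  # no confident input -> safe default
    return max(scores, key=scores.get)

# Example: EEG weakly suggests "left"; speech and motion agree on "forward".
readings = [
    ("eeg", "left", 0.55),
    ("speech", "forward", 0.90),
    ("motion", "forward", 0.60),
]
print(fuse_commands(readings))  # -> forward
```

A fail-safe default (returning no command when every modality is below threshold) matters in assistive settings, where an uncertain guess can move the user's wheelchair unintentionally.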

Original language: English (US)
Title of host publication: 2016 International Conference on Hardware/Software Codesign and System Synthesis, CODES+ISSS 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781450330503
State: Published - Nov 21, 2016
Event: 2016 International Conference on Hardware/Software Codesign and System Synthesis, CODES+ISSS 2016 - Pittsburgh, United States
Duration: Oct 2, 2016 - Oct 7, 2016


ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Control and Systems Engineering

