Abstract

Social interactions mediate our communication with others, enable the development and maintenance of personal and professional relationships, and contribute greatly to our health. While both verbal cues (i.e., speech) and non-verbal cues (e.g., facial expressions, hand gestures, and body language) are exchanged during social interactions, the latter encompass more information (~65%). Given their inherently visual nature, non-verbal cues are largely inaccessible to individuals who are blind, putting this population at a social disadvantage compared to their sighted peers. For individuals who are blind, embarrassing social situations are not uncommon due to miscommunication, which can lead to social avoidance and isolation. In this paper, we propose a mapping from visual facial expressions, represented as facial action units that may be extracted using computer vision algorithms, to haptic (vibrotactile) representations, toward discreet, real-time perception of facial expressions during social interactions by individuals who are blind.
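As a purely illustrative sketch of the idea described in the abstract (not the mapping proposed in the paper), translating detected facial action units into vibrotactile patterns could look like the following. The AU codes follow standard FACS numbering, but the motor layout, intensities, and the `AU_TO_PATTERN` table are invented for this example:

```python
# Hypothetical sketch: mapping detected facial action units (FACS AUs)
# to vibrotactile motor activations on a wearable array.
# A "pattern" is a tuple of (motor_index, intensity) pairs for an
# assumed 4-motor layout; the assignments below are illustrative only.
AU_TO_PATTERN = {
    4:  ((0, 1.0),),                  # AU4: brow lowerer  -> single upper motor
    6:  ((0, 0.8), (1, 0.8)),         # AU6: cheek raiser  -> both upper motors
    12: ((2, 1.0), (3, 1.0)),         # AU12: lip corner puller -> both lower motors
}

def encode_expression(active_aus):
    """Combine the patterns of all detected AUs, keeping the maximum
    intensity requested for each motor."""
    motors = {}
    for au in active_aus:
        for motor, intensity in AU_TO_PATTERN.get(au, ()):
            motors[motor] = max(motors.get(motor, 0.0), intensity)
    return motors

# A smile is commonly characterised by AU6 + AU12:
print(encode_expression([6, 12]))  # {0: 0.8, 1: 0.8, 2: 1.0, 3: 1.0}
```

Taking the per-motor maximum lets simultaneously active AUs share motors without exceeding the intensity range; a real system would also need temporal smoothing as AU detections arrive frame by frame.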

Original language: English (US)
Title of host publication: Smart Multimedia - 1st International Conference, ICSM 2018, Revised Selected Papers
Editors: Stefano Berretti, Anup Basu
Publisher: Springer Verlag
Pages: 3-14
Number of pages: 12
ISBN (Print): 9783030043742
DOI: https://doi.org/10.1007/978-3-030-04375-9_1
State: Published - Jan 1 2018
Event: 1st International Conference on Smart Multimedia, ICSM 2018 - Toulon, France
Duration: Aug 24 2018 – Aug 26 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11010 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 1st International Conference on Smart Multimedia, ICSM 2018
Country: France
City: Toulon
Period: 8/24/18 – 8/26/18

Keywords

  • Assistive technology
  • Facial action units
  • Sensory substitution
  • Social assistive aids
  • Visual-to-tactile mapping

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

McDaniel, T., Devkota, S., Tadayon, R., Duarte, B., Fakhri, B., & Panchanathan, S. (2018). Tactile facial action units toward enriching social interactions for individuals who are blind. In S. Berretti, & A. Basu (Eds.), Smart Multimedia - 1st International Conference, ICSM 2018, Revised Selected Papers (pp. 3-14). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11010 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-030-04375-9_1

Tactile facial action units toward enriching social interactions for individuals who are blind. / McDaniel, Troy; Devkota, Samjhana; Tadayon, Ramin; Duarte, Bryan; Fakhri, Bijan; Panchanathan, Sethuraman.

Smart Multimedia - 1st International Conference, ICSM 2018, Revised Selected Papers. ed. / Stefano Berretti; Anup Basu. Springer Verlag, 2018. p. 3-14 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11010 LNCS).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

McDaniel, T, Devkota, S, Tadayon, R, Duarte, B, Fakhri, B & Panchanathan, S 2018, Tactile facial action units toward enriching social interactions for individuals who are blind. in S Berretti & A Basu (eds), Smart Multimedia - 1st International Conference, ICSM 2018, Revised Selected Papers. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11010 LNCS, Springer Verlag, pp. 3-14, 1st International Conference on Smart Multimedia, ICSM 2018, Toulon, France, 8/24/18. https://doi.org/10.1007/978-3-030-04375-9_1
McDaniel T, Devkota S, Tadayon R, Duarte B, Fakhri B, Panchanathan S. Tactile facial action units toward enriching social interactions for individuals who are blind. In Berretti S, Basu A, editors, Smart Multimedia - 1st International Conference, ICSM 2018, Revised Selected Papers. Springer Verlag. 2018. p. 3-14. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). https://doi.org/10.1007/978-3-030-04375-9_1
McDaniel, Troy ; Devkota, Samjhana ; Tadayon, Ramin ; Duarte, Bryan ; Fakhri, Bijan ; Panchanathan, Sethuraman. / Tactile facial action units toward enriching social interactions for individuals who are blind. Smart Multimedia - 1st International Conference, ICSM 2018, Revised Selected Papers. editor / Stefano Berretti ; Anup Basu. Springer Verlag, 2018. pp. 3-14 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
@inproceedings{4d695db5871d4f3f93333e9c8d9b5358,
title = "Tactile facial action units toward enriching social interactions for individuals who are blind",
abstract = "Social interactions mediate our communication with others, enable development and maintenance of personal and professional relationships, and contribute greatly to our health. While both verbal cues (i.e., speech) and non-verbal cues (e.g., facial expressions, hand gestures, and body language) are exchanged during social interactions, the latter encompasses more information (\textasciitilde 65\%). Given their inherent visual nature, non-verbal cues are largely inaccessible to individuals who are blind, putting this population at a social disadvantage compared to their sighted peers. For individuals who are blind, embarrassing social situations are not uncommon due to miscommunication, which can lead to social avoidance and isolation. In this paper, we propose a mapping between visual facial expressions, represented as facial action units, which may be extracted using computer vision algorithms, to haptic (vibrotactile) representations, toward discreet and real-time perception of facial expressions during social interactions by individuals who are blind.",
keywords = "Assistive technology, Facial action units, Sensory substitution, Social assistive aids, Visual-to-tactile mapping",
author = "Troy McDaniel and Samjhana Devkota and Ramin Tadayon and Bryan Duarte and Bijan Fakhri and Sethuraman Panchanathan",
year = "2018",
month = jan,
day = "1",
doi = "10.1007/978-3-030-04375-9_1",
language = "English (US)",
isbn = "9783030043742",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
volume = "11010 LNCS",
publisher = "Springer Verlag",
pages = "3--14",
editor = "Stefano Berretti and Anup Basu",
booktitle = "Smart Multimedia - 1st International Conference, ICSM 2018, Revised Selected Papers",

}

TY - GEN

T1 - Tactile facial action units toward enriching social interactions for individuals who are blind

AU - McDaniel, Troy

AU - Devkota, Samjhana

AU - Tadayon, Ramin

AU - Duarte, Bryan

AU - Fakhri, Bijan

AU - Panchanathan, Sethuraman

PY - 2018/1/1

Y1 - 2018/1/1

N2 - Social interactions mediate our communication with others, enable development and maintenance of personal and professional relationships, and contribute greatly to our health. While both verbal cues (i.e., speech) and non-verbal cues (e.g., facial expressions, hand gestures, and body language) are exchanged during social interactions, the latter encompasses more information (~65%). Given their inherent visual nature, non-verbal cues are largely inaccessible to individuals who are blind, putting this population at a social disadvantage compared to their sighted peers. For individuals who are blind, embarrassing social situations are not uncommon due to miscommunication, which can lead to social avoidance and isolation. In this paper, we propose a mapping between visual facial expressions, represented as facial action units, which may be extracted using computer vision algorithms, to haptic (vibrotactile) representations, toward discreet and real-time perception of facial expressions during social interactions by individuals who are blind.

AB - Social interactions mediate our communication with others, enable development and maintenance of personal and professional relationships, and contribute greatly to our health. While both verbal cues (i.e., speech) and non-verbal cues (e.g., facial expressions, hand gestures, and body language) are exchanged during social interactions, the latter encompasses more information (~65%). Given their inherent visual nature, non-verbal cues are largely inaccessible to individuals who are blind, putting this population at a social disadvantage compared to their sighted peers. For individuals who are blind, embarrassing social situations are not uncommon due to miscommunication, which can lead to social avoidance and isolation. In this paper, we propose a mapping between visual facial expressions, represented as facial action units, which may be extracted using computer vision algorithms, to haptic (vibrotactile) representations, toward discreet and real-time perception of facial expressions during social interactions by individuals who are blind.

KW - Assistive technology

KW - Facial action units

KW - Sensory substitution

KW - Social assistive aids

KW - Visual-to-tactile mapping

UR - http://www.scopus.com/inward/record.url?scp=85058511419&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85058511419&partnerID=8YFLogxK

U2 - 10.1007/978-3-030-04375-9_1

DO - 10.1007/978-3-030-04375-9_1

M3 - Conference contribution

AN - SCOPUS:85058511419

SN - 9783030043742

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 3

EP - 14

BT - Smart Multimedia - 1st International Conference, ICSM 2018, Revised Selected Papers

A2 - Berretti, Stefano

A2 - Basu, Anup

PB - Springer Verlag

ER -