Social interactions mediate our communication with others, enable the development and maintenance of personal and professional relationships, and contribute greatly to our health. While both verbal cues (i.e., speech) and non-verbal cues (e.g., facial expressions, hand gestures, and body language) are exchanged during social interactions, the latter are estimated to carry the majority of the information conveyed (~65%). Given their inherently visual nature, non-verbal cues are largely inaccessible to individuals who are blind, putting this population at a social disadvantage relative to their sighted peers. For individuals who are blind, embarrassing social situations arising from miscommunication are not uncommon and can lead to social avoidance and isolation. In this paper, we propose a mapping from visual facial expressions, represented as facial action units that may be extracted using computer vision algorithms, to haptic (vibrotactile) representations, toward discreet, real-time perception of facial expressions during social interactions by individuals who are blind.
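To make the proposed pipeline concrete, the sketch below illustrates the general idea of translating detected facial action units (AUs) into vibrotactile motor commands. This is not the paper's implementation: the specific AU selection, motor indices, intensities, and durations are hypothetical placeholders, and a real system would receive AU detections from a computer vision front end such as an AU classifier.

```python
# Illustrative sketch (assumed parameters, not the proposed system's actual
# mapping): a few FACS action units mapped to vibrotactile motor commands.
# Each AU maps to (motor_index, intensity in [0, 1], duration_ms).
AU_TO_VIBROTACTILE = {
    "AU4":  (0, 0.8, 300),   # brow lowerer (frown): strong, long pulse
    "AU6":  (1, 0.5, 150),   # cheek raiser
    "AU12": (2, 0.5, 150),   # lip corner puller (smile)
}

def encode_expression(active_aus, au_intensities=None):
    """Convert detected AUs into a list of motor commands.

    active_aus: iterable of AU labels from a (hypothetical) vision front end.
    au_intensities: optional per-AU scaling factors in [0, 1].
    """
    au_intensities = au_intensities or {}
    commands = []
    for au in active_aus:
        if au in AU_TO_VIBROTACTILE:
            motor, base_intensity, duration = AU_TO_VIBROTACTILE[au]
            scale = au_intensities.get(au, 1.0)
            commands.append((motor, base_intensity * scale, duration))
    return commands

# Example: a smile (AU6 + AU12) drives motors 1 and 2.
print(encode_expression(["AU6", "AU12"]))
```

In such a design, each expression-relevant AU could drive a distinct motor on a wearable array, with intensity and duration encoding the strength of the detected cue; unrecognized AUs are simply ignored.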