Topic modeling is an important tool in social media analysis, allowing researchers to quickly understand large text corpora by investigating the topics underlying them. A fundamental problem of topic models lies in assessing the quality of the topics from the perspective of human interpretability: how well can humans understand the meaning of topics generated by statistical topic modeling algorithms? In this work, we advance the study of this question by introducing Topic Consensus, a new measure that calculates the quality of a topic by investigating its consensus with a set of known topics underlying the data. We view the quality of the topics from three perspectives: 1) topic interpretability, 2) how documents relate to the underlying topics, and 3) how interpretable the topics are when the corpus has an underlying categorization. We provide insights into how well the results of Mechanical Turk match automated methods for calculating topic quality. The probability distribution of the words in a topic best fits the Topic Coherence measure, both in terms of correlation and in identifying the best topics.
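To illustrate the kind of automated measure discussed above, the following is a minimal sketch of an NPMI-based topic-coherence score. It is not the paper's exact formulation; the corpus, topic word lists, and the simple set-based document representation are all hypothetical, chosen only to show how coherence rewards topic words that frequently co-occur in documents.

```python
import math
from itertools import combinations

def topic_coherence(topic_words, documents, eps=1e-12):
    """Average NPMI over all word pairs in a topic.

    documents: list of sets of words; a simple stand-in for a corpus.
    Returns a value in roughly [-1, 1]; higher means more coherent.
    """
    n_docs = len(documents)

    def doc_freq(words):
        # Number of documents containing all of the given words.
        return sum(1 for doc in documents if all(w in doc for w in words))

    pairs = list(combinations(topic_words, 2))
    score = 0.0
    for w1, w2 in pairs:
        p1 = doc_freq([w1]) / n_docs
        p2 = doc_freq([w2]) / n_docs
        p12 = doc_freq([w1, w2]) / n_docs
        # Pointwise mutual information, normalized to [-1, 1] (NPMI).
        pmi = math.log((p12 + eps) / (p1 * p2 + eps))
        npmi = pmi / -math.log(p12 + eps)
        score += npmi
    return score / len(pairs)

# Hypothetical toy corpus: two documents about fruit, two about cars.
docs = [{"apple", "fruit", "banana"},
        {"apple", "fruit"},
        {"car", "engine", "wheel"},
        {"car", "engine"}]
print(round(topic_coherence(["apple", "fruit"], docs), 3))   # words that always co-occur
print(round(topic_coherence(["apple", "engine"], docs), 3))  # words that never co-occur
```

In this toy example, a topic whose words always appear together scores near 1, while a topic mixing words from unrelated documents scores negatively, mirroring how coherence-style measures can be correlated against human judgments of topic quality.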