The issue of the mental representation of symbols has plagued cognitive science for decades and remains unresolved. The basic dispute is between the symbol system hypothesis of artificial intelligence, whose proponents include Newell and Simon, Newell, Smith, Fodor and Pylyshyn, and others, and Smolensky-style connectionism, in which a "high-level" concept or symbol is represented by a number of subconcepts (subsymbols). Both sides claim that their representational systems operate at the cognitive level, meaning that the elements of those systems carry meaning. In this paper, we take a closer look at Smolensky-style connectionism and find it flawed in a number of ways. In particular, the cognitive-level subconcepts (subsymbols with meaning) used by Smolensky are inconsistent with several principles of human learning. We argue that the subsymbolic distributed representation at the non-cognitive neural level (McClelland and others) is sufficient to represent a concept or symbol, and that an additional layer of cognitive-level subconcepts (Smolensky) is redundant.