TY - GEN
T1 - Is the connectionist notion of subconcepts flawed?
AU - Roy, Asim
PY - 2010
Y1 - 2010
AB - The issue of the mental representation of symbols has plagued cognitive science for decades and remains unresolved. The basic dispute is between the symbol system hypothesis of artificial intelligence, whose proponents include Newell and Simon [1], Newell [2], Smith [3], Fodor and Pylyshyn [4], and others, and Smolensky-style connectionism [5], in which a "high-level" concept or symbol is represented by a number of subconcepts (subsymbols). Both sides claim that their representational systems are at the cognitive level, which means that the elements used in those systems have meaning. In this paper, we take a closer look at Smolensky-style connectionism and find it flawed in a number of ways. In particular, the cognitive-level subconcepts (subsymbols with meaning) used by Smolensky are inconsistent with a number of principles of human learning. We argue that the subsymbolic distributed representation at the non-cognitive neural layer (McClelland and others [6], [7], [8]) is sufficient to represent a concept or symbol, and that an additional layer of cognitive-level subconcepts (Smolensky [5]) is redundant.
UR - http://www.scopus.com/inward/record.url?scp=79959469482&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79959469482&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2010.5596954
DO - 10.1109/IJCNN.2010.5596954
M3 - Conference contribution
AN - SCOPUS:79959469482
SN - 9781424469178
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2010 IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010
T2 - 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010
Y2 - 18 July 2010 through 23 July 2010
ER -