TY - JOUR
T1 - An insect-inspired randomly weighted neural network with random Fourier features for neuro-symbolic relational learning
AU - Hong, Jinyung
AU - Pavlic, Theodore P.
N1 - Funding Information:
This work was supported in part by NSF SES-1735579.
Publisher Copyright:
© CEUR Workshop Proceedings 2021.
PY - 2021
Y1 - 2021
N2 - The computer-science field of Knowledge Representation and Reasoning (KRR) aims to understand, reason about, and interpret knowledge as efficiently as human beings do. Because many logical formalisms and reasoning methods in the area have shown the capability of higher-order learning, such as abstract concept learning, integrating artificial neural networks (ANNs) with KRR methods for learning complex and practical tasks has received much attention. For example, Neural Tensor Networks (NTNs) are neural-network models capable of transforming symbolic representations into vector spaces where reasoning can be performed through matrix computation; when used in Logic Tensor Networks (LTNs), they are able to embed first-order logic symbols such as constants, facts, and rules into real-valued tensors. The integration of KRR and ANNs suggests a potential avenue for bringing biological inspiration from neuroscience into KRR. However, higher-order learning is not exclusive to human brains. Insects, such as fruit flies and honey bees, can solve simple associative learning tasks and learn abstract concepts such as “sameness” and “difference,” an ability viewed as a higher-order cognitive function and typically thought to depend on top-down neocortical processing. Empirical research with fruit flies strongly supports that a randomized representational architecture is used in olfactory processing in insect brains. Based on these results, we propose the Randomly Weighted Feature Network (RWFN), which incorporates randomly drawn, untrained weights in an encoder paired with an adapted linear model as a decoder. The randomized projections between input neurons and higher-order processing centers in the insect brain are mimicked in the RWFN by a single-hidden-layer neural network that specially structures latent representations in the hidden layer using random Fourier features, which better represent complex relationships between inputs through kernel approximation.
Because of this special representation, RWFNs can effectively learn the degree of relationship among inputs by training only a linear decoder model. We compare the performance of RWFNs to LTNs for Semantic Image Interpretation (SII) tasks, which have been used as a representative example of how LTNs utilize reasoning over first-order logic to surpass the performance of solely data-driven methods. We demonstrate that, compared to LTNs, RWFNs can achieve better or similar performance for both object classification and detection of the part-of relations between objects in SII tasks while using far fewer learnable parameters (1:62 ratio) and a faster learning process (1:2 ratio of running speed). Furthermore, we show that because the randomized weights do not depend on the data, several decoders can share a single randomized encoder, giving RWFNs a unique economy of spatial scale for simultaneous classification tasks.
AB - The computer-science field of Knowledge Representation and Reasoning (KRR) aims to understand, reason about, and interpret knowledge as efficiently as human beings do. Because many logical formalisms and reasoning methods in the area have shown the capability of higher-order learning, such as abstract concept learning, integrating artificial neural networks (ANNs) with KRR methods for learning complex and practical tasks has received much attention. For example, Neural Tensor Networks (NTNs) are neural-network models capable of transforming symbolic representations into vector spaces where reasoning can be performed through matrix computation; when used in Logic Tensor Networks (LTNs), they are able to embed first-order logic symbols such as constants, facts, and rules into real-valued tensors. The integration of KRR and ANNs suggests a potential avenue for bringing biological inspiration from neuroscience into KRR. However, higher-order learning is not exclusive to human brains. Insects, such as fruit flies and honey bees, can solve simple associative learning tasks and learn abstract concepts such as “sameness” and “difference,” an ability viewed as a higher-order cognitive function and typically thought to depend on top-down neocortical processing. Empirical research with fruit flies strongly supports that a randomized representational architecture is used in olfactory processing in insect brains. Based on these results, we propose the Randomly Weighted Feature Network (RWFN), which incorporates randomly drawn, untrained weights in an encoder paired with an adapted linear model as a decoder. The randomized projections between input neurons and higher-order processing centers in the insect brain are mimicked in the RWFN by a single-hidden-layer neural network that specially structures latent representations in the hidden layer using random Fourier features, which better represent complex relationships between inputs through kernel approximation.
Because of this special representation, RWFNs can effectively learn the degree of relationship among inputs by training only a linear decoder model. We compare the performance of RWFNs to LTNs for Semantic Image Interpretation (SII) tasks, which have been used as a representative example of how LTNs utilize reasoning over first-order logic to surpass the performance of solely data-driven methods. We demonstrate that, compared to LTNs, RWFNs can achieve better or similar performance for both object classification and detection of the part-of relations between objects in SII tasks while using far fewer learnable parameters (1:62 ratio) and a faster learning process (1:2 ratio of running speed). Furthermore, we show that because the randomized weights do not depend on the data, several decoders can share a single randomized encoder, giving RWFNs a unique economy of spatial scale for simultaneous classification tasks.
KW - Insect neuroscience
KW - Model architecture
KW - Neuro-symbolic computing
KW - Randomization
UR - http://www.scopus.com/inward/record.url?scp=85118253990&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85118253990&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85118253990
SN - 1613-0073
VL - 2986
SP - 126
EP - 142
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 15th International Workshop on Neural-Symbolic Learning and Reasoning, NeSy 2021
Y2 - 25 October 2021 through 27 October 2021
ER -