An insect-inspired randomly weighted neural network with random Fourier features for neuro-symbolic relational learning

Jinyung Hong, Theodore P. Pavlic

Research output: Contribution to journal › Conference article › peer-review

Abstract

The computer-science field of Knowledge Representation and Reasoning (KRR) aims to understand, reason over, and interpret knowledge as efficiently as human beings do. Because many logical formalisms and reasoning methods in the area have shown the capability of higher-order learning, such as abstract concept learning, integrating artificial neural networks (ANNs) with KRR methods for learning complex and practical tasks has received much attention. For example, Neural Tensor Networks (NTNs) are neural-network models capable of transforming symbolic representations into vector spaces where reasoning can be performed through matrix computation; when used in Logic Tensor Networks (LTNs), they are able to embed first-order logic symbols such as constants, facts, and rules into real-valued tensors. The integration of KRR and ANNs suggests a potential avenue for bringing biological inspiration from neuroscience into KRR. However, higher-order learning is not exclusive to human brains. Insects, such as fruit flies and honey bees, can solve simple associative learning tasks and learn abstract concepts such as “sameness” and “difference,” which are viewed as higher-order cognitive functions typically thought to depend on top-down neocortical processing. Empirical research with fruit flies strongly supports the view that a randomized representational architecture is used in olfactory processing in insect brains. Based on these results, we propose a Randomly Weighted Feature Network (RWFN) that incorporates randomly drawn, untrained weights in an encoder paired with an adapted linear model as a decoder. The randomized projections between input neurons and higher-order processing centers in the insect brain are mimicked in the RWFN by a single-hidden-layer neural network whose hidden layer forms specially structured latent representations using random Fourier features, which better capture complex relationships between inputs through kernel approximation.
Because of this special representation, RWFNs can effectively learn the degree of relationship among inputs by training only a linear decoder model. We compare the performance of RWFNs to LTNs for Semantic Image Interpretation (SII) tasks, which have been used as a representative example of how LTNs utilize reasoning over first-order logic to surpass the performance of solely data-driven methods. We demonstrate that, compared to LTNs, RWFNs can achieve better or similar performance for both object classification and the detection of part-of relations between objects in SII tasks while using far fewer learnable parameters (1:62 ratio) and a faster learning process (1:2 ratio of running speed). Furthermore, we show that because the randomized weights do not depend on the data, several decoders can share a single randomized encoder, giving RWFNs a unique economy of spatial scale for simultaneous classification tasks.
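The architecture sketched in the abstract — a fixed, randomly initialized encoder built from random Fourier features, with only a linear decoder trained — can be illustrated in a few lines of NumPy. This is a minimal sketch under assumed dimensions and toy targets, not the authors' implementation: the feature construction follows the standard random-Fourier-features recipe for approximating an RBF kernel, the decoder is plain ridge regression, and the two targets `y1`/`y2` are invented solely to show two decoders sharing one randomized encoder, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (not taken from the paper).
d_in, d_hidden, n = 4, 256, 500

# Encoder: randomly drawn, untrained weights, fixed after initialization.
# Random Fourier features cos(xW + b) approximate an RBF kernel:
# W ~ N(0, 2*gamma*I), b ~ Uniform(0, 2*pi).
gamma = 0.05
W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d_in, d_hidden))
b = rng.uniform(0.0, 2.0 * np.pi, size=d_hidden)

def encode(X):
    """Fixed random encoder producing random Fourier features."""
    return np.sqrt(2.0 / d_hidden) * np.cos(X @ W + b)

# Toy data: two different targets computed from the same inputs.
X = rng.normal(size=(n, d_in))
y1 = np.sin(X.sum(axis=1))       # first task
y2 = np.cos(X[:, 0] - X[:, 1])   # second task

# Only the linear decoders are trained (ridge regression here);
# both decoders share the single randomized encoder's output H.
H = encode(X)
lam = 1e-3
A = H.T @ H + lam * np.eye(d_hidden)
beta1 = np.linalg.solve(A, H.T @ y1)
beta2 = np.linalg.solve(A, H.T @ y2)

mse1 = np.mean((H @ beta1 - y1) ** 2)
mse2 = np.mean((H @ beta2 - y2) ** 2)
```

Because `W` and `b` never change, adding a new task costs only one more linear solve against the already-computed `H` — the economy of spatial scale noted in the abstract.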

Original language: English (US)
Pages (from-to): 126-142
Number of pages: 17
Journal: CEUR Workshop Proceedings
Volume: 2986
State: Published - 2021
Event: 15th International Workshop on Neural-Symbolic Learning and Reasoning, NeSy 2021 - Virtual, Online
Duration: Oct 25, 2021 - Oct 27, 2021

Keywords

  • Insect neuroscience
  • Model architecture
  • Neuro-symbolic computing
  • Randomization

ASJC Scopus subject areas

  • Computer Science(all)
