Abstract
Representations play an essential role in the learning of artificial and biological systems because of their capacity to identify characteristic patterns in the sensory environment. In this work we examined latent representations of several sets of images, such as basic geometric shapes and handwritten digits, produced by generative models in the course of unsupervised generative learning. A biologically plausible neural network architecture, based on bidirectional synaptic connections and equivalent in training and processing to a symmetric autoencoder, was proposed and defined. It was demonstrated that conceptual representations with good decoupling of concept regions can be produced by generative models of limited complexity, and that incremental evolution of the architecture can improve the ability to learn data of increasing conceptual complexity, including realistic images such as handwritten digits. These results demonstrate the potential of the resulting conceptual representations as a natural platform for conceptual modeling of sensory environments and other intelligent behaviors.