Abstract
In this work we investigate the connections between the training process of unsupervised generative learning with self-encoding and regeneration, and the information structure of the latent representations produced by such models. We present theoretical arguments, supported by previously published experimental results, leading to the conclusion that under certain constraints generative self-learning statistically favors latent representations exhibiting spontaneous categorization. These results may provide insight into common principles underlying learning and the emergence of intelligence in machine and biological systems.