Abstract
This paper introduces the Optimized Interpretable Kolmogorov-Arnold Network (OIKAN), a novel hybrid architecture that combines the predictive power of multilayer perceptrons with the interpretability of Kolmogorov-Arnold Networks. While traditional deep learning models achieve high accuracy at the expense of transparency, OIKAN addresses this trade-off through advanced basis expansions, feature interaction modeling, dimensionality reduction via SVD, and targeted regularization strategies. Our approach leverages the Kolmogorov-Arnold Representation Theorem to decompose complex multivariate functions into hierarchical univariate transformations, enhancing model explainability while maintaining competitive performance. Experimental results on classification (Titanic survival prediction), regression (housing prices), and function approximation (Lagrangian pendulum) tasks demonstrate OIKAN's effectiveness across diverse problem domains. The model provides symbolic representations and visualizations that facilitate scientific understanding and regulatory compliance. As an open-source implementation, OIKAN represents a significant advancement in developing machine learning models that balance predictive accuracy with interpretability, meeting the growing demand for transparent AI systems in scientific and industrial applications.
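The decomposition described above can be illustrated with a minimal sketch. The Kolmogorov-Arnold representation writes a multivariate function as nested sums of univariate transforms, f(x) = Σ_q Φ_q(Σ_p φ_{q,p}(x_p)); the layer below applies a learned univariate polynomial basis expansion to each input and sums the results. All names and the choice of polynomial basis are illustrative assumptions, not OIKAN's actual API.

```python
import numpy as np

def univariate_basis(x, degree=3):
    """Expand a 1-D array into polynomial basis features [x, x^2, ..., x^degree].

    A stand-in for the 'advanced basis expansions' the paper describes;
    OIKAN's actual basis functions may differ.
    """
    return np.stack([x ** d for d in range(1, degree + 1)], axis=-1)

class KALayer:
    """Hypothetical single Kolmogorov-Arnold layer (illustrative only)."""

    def __init__(self, n_inputs, n_outputs, degree=3, seed=0):
        rng = np.random.default_rng(seed)
        # One set of basis coefficients per (input, output) pair; each
        # (input, output) slice parameterizes one univariate function.
        self.coef = rng.normal(size=(n_inputs, degree, n_outputs)) * 0.1

    def forward(self, X):
        # X: (batch, n_inputs). Each input feature passes through its own
        # univariate transform, and the transforms are summed over inputs,
        # i.e. the inner sum of the Kolmogorov-Arnold representation.
        B = np.stack(
            [univariate_basis(X[:, p]) for p in range(X.shape[1])], axis=1
        )  # (batch, n_inputs, degree)
        return np.einsum("bpd,pdo->bo", B, self.coef)
```

Because each learned function is univariate, its coefficients can be read off directly as a symbolic polynomial per input, which is the interpretability property the architecture relies on.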
Supplementary weblinks
GitHub Repository - OIKAN: Optimized Interpretable Kolmogorov-Arnold Networks (OIKAN), a deep learning framework for interpretable neural networks using advanced basis functions.