International Journal On Advances in Security, volume 18, numbers 1 and 2, 2025
Authors:
Evgenii Ostanin
Nebojsa Djosic
Fatima Hussain
Salah Sharieh
Alexander Ferworn
Keywords: Kolmogorov–Arnold Networks; KAN; MNIST; FGSM; PGD; Classification; Adversarial Training
Abstract:
Kolmogorov–Arnold Networks have emerged as promising architectures thanks to their adaptive activation functions and enhanced interpretability, but their robustness under adversarial conditions remains underexplored. In this study, we evaluate four Kolmogorov–Arnold Network variants (Linear, Fourier, Jacobi, and Chebyshev) against Gaussian noise and two gradient-based attacks: the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Through detailed comparative analyses and adversarial-training experiments with varying proportions of perturbed data, we reveal substantial differences in resilience across the variants and relative to a multilayer perceptron baseline. Our results show that targeted adversarial training materially improves robustness under strong attacks: including only 5% FGSM examples and 5% PGD examples in the training set recovers between 60 and 90 percentage points of accuracy against these attacks. These findings clarify the factors influencing Kolmogorov–Arnold Network robustness and validate adversarial training as a practical hardening strategy for deployment in adversarially challenging environments.
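The Fast Gradient Sign Method referenced in the abstract perturbs an input along the sign of the loss gradient, x' = x + eps * sign(grad_x L). As a minimal illustrative sketch only (not the paper's code, and using a toy logistic model rather than a Kolmogorov–Arnold Network), the core step can be written in NumPy:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM against a binary logistic-regression model.
    x: input vector, y: true label in {0, 1}, (w, b): model parameters,
    eps: L-infinity perturbation budget."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of class 1
    grad_x = (p - y) * w              # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # step that increases the loss

# Toy demo with hypothetical random parameters.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0  # true class is 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
# The attack lowers the logit of the true class (y == 1),
# while staying within the eps budget per coordinate.
print((w @ x_adv + b) < (w @ x + b))  # → True
print(np.max(np.abs(x_adv - x)) <= 0.1)  # → True
```

Projected Gradient Descent, the stronger attack studied in the paper, iterates this same step and projects back into the eps-ball after each iteration.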
Pages: 77–91
Copyright: Copyright (c) to authors, 2025. Used with permission.
Publication date: June 30, 2025
Published in: journal
ISSN: 1942-2636