IEEE Access, 2026 (SCI-Expanded, Scopus)
Activation functions are fundamental components of deep neural networks: they provide the nonlinear transformations that enable complex representation learning and strongly influence model performance in deep learning-based artificial intelligence systems. Despite their proven importance, however, commonly used activation functions often suffer from critical limitations such as vanishing or exploding gradients, non-differentiability at zero, and instability during training. These limitations are particularly relevant in image classification and Human Activity Recognition (HAR), where complex spatial or spatiotemporal dependencies, together with noisy data, can undermine training stability. To address them, we propose a novel activation function, ExpAbsTanH, which combines self-gating with exponential control and smooth nonlinearity. To assess its robustness and training stability, initial experiments were conducted on the MNIST and CIFAR-10 datasets, showing that ExpAbsTanH consistently delivers strong classification performance across diverse deep learning architectures: it outperformed ReLU and Swish in most cases and matched the performance of Mish. When integrated into ResNet-200, ExpAbsTanH achieved a Top-1 accuracy of 90.08% on CIFAR-10, highlighting its stability and effectiveness in deep convolutional networks. Beyond static images, the robustness of ExpAbsTanH for HAR was evaluated on the UCF-50 and UCF-101 datasets using a hybrid model that combines ResNet-50 with a single-layer Long Short-Term Memory (LSTM) network and an attention module. On UCF-50, the ExpAbsTanH-based model consistently outperformed its ReLU-based counterpart, achieving Top-1 and Top-5 accuracies of 95.69% and 98.74%, respectively. To further assess generalization, experiments on randomly selected subsets of UCF-101 yielded Top-1 and Top-5 accuracies of up to 99.47% and 99.79%, respectively. Under noisy conditions, the ExpAbsTanH-based model achieved a Top-1 accuracy of 95.51% on UCF-50, again surpassing the ReLU-based model, and maintained a Top-1 accuracy of 98.63% on UCF-101, demonstrating robustness to input distortions. Gradient-weighted Class Activation Mapping (Grad-CAM) analysis further revealed that ExpAbsTanH focuses on semantically meaningful and spatially coherent features, whereas ReLU occasionally emphasizes irrelevant regions. These results establish ExpAbsTanH as a robust and effective alternative activation function that improves the reliability of real-world deep learning systems across image and video recognition tasks.
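For reference, a minimal PyTorch sketch of a self-gated activation is given below. The abstract does not state the closed-form definition of ExpAbsTanH, so the gate used here, tanh(exp(x)), is only an illustrative stand-in built from the exponential and tanh components the name suggests; the published formula (which presumably also involves |x|) should be substituted once known. The swap_relu helper is likewise hypothetical and simply shows how such an activation can replace ReLU throughout an existing network.

import torch
import torch.nn as nn

class ExpAbsTanH(nn.Module):
    """Self-gated activation in the spirit of Swish and Mish.

    NOTE: the abstract does not give the formula of ExpAbsTanH; the
    gate below, tanh(exp(x)), is an illustrative stand-in only and
    should be replaced with the paper's published definition.
    """
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-gating: the input modulates itself through a smooth,
        # bounded gate, so the function is differentiable at zero and
        # approaches the identity for large positive inputs.
        return x * torch.tanh(torch.exp(x))

def swap_relu(module: nn.Module, act_factory) -> None:
    """Recursively replace every nn.ReLU in `module` with act_factory()."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, act_factory())
        else:
            swap_relu(child, act_factory)

For example, swap_relu(model, ExpAbsTanH) converts a stock torchvision ResNet to the candidate activation before training.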
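The HAR experiments use a hybrid ResNet-50 + single-layer LSTM + attention pipeline. A minimal sketch of such a model, under assumptions about the hidden size and the attention formulation (neither is specified in the abstract), could look as follows; num_classes defaults to 50 for UCF-50.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class HARModel(nn.Module):
    """ResNet-50 per-frame features -> single-layer LSTM -> attention pooling.

    Sketch of the hybrid architecture described in the abstract; the
    512-unit hidden size and the additive attention are assumptions.
    """
    def __init__(self, num_classes: int = 50, hidden: int = 512):
        super().__init__()
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()                 # expose 2048-d frame features
        self.backbone = backbone
        self.lstm = nn.LSTM(2048, hidden, num_layers=1, batch_first=True)
        self.attn = nn.Linear(hidden, 1)            # per-timestep attention scores
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        h, _ = self.lstm(feats)                     # (b, t, hidden)
        w = torch.softmax(self.attn(h), dim=1)      # temporal attention weights
        ctx = (w * h).sum(dim=1)                    # attention-pooled context
        return self.head(ctx)

Replacing the backbone's ReLUs, e.g. swap_relu(model.backbone, ExpAbsTanH), yields the ExpAbsTanH variant compared against the ReLU baseline.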
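Grad-CAM, used for the qualitative comparison, weights a convolutional layer's activations by the spatial average of the class score's gradients and keeps only the positive evidence. A minimal hook-based sketch follows; the layer choice and normalization are conventional defaults, not taken from the paper.

import torch
import torch.nn.functional as F

def grad_cam(model, layer, image, class_idx=None):
    """Minimal Grad-CAM heatmap for one (1, 3, H, W) image.

    `layer` is any conv block of `model` (e.g. a ResNet's last stage).
    Hook-based sketch, not the paper's exact setup.
    """
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        logits = model(image)
        cls = int(logits.argmax()) if class_idx is None else class_idx
        model.zero_grad()
        logits[0, cls].backward()                   # gradients of the class score
    finally:
        h1.remove()
        h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)     # GAP of gradients: channel weights
    cam = F.relu((w * acts[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return cam / cam.max().clamp(min=1e-8)          # normalize to [0, 1]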