AKDENIZ 15th INTERNATIONAL CONFERENCE ON APPLIED SCIENCES, Girne, Cyprus (TRNC), 5–7 December 2025 (Full Text Paper)
Mobile robots are robotic systems that move through their environment using components such as wheels or tracks. They offer advantages such as rapid adaptation to different environments, increased efficiency through reduced human labor, and improved safety by taking over dangerous tasks. These features make them suitable for many fields, including industry, agriculture, defense, and the service sector. In this study, the PID controller parameters for trajectory tracking of a mobile robot are tuned using the Deep Q-Network (DQN) algorithm, a reinforcement learning method. Traditional PID tuning methods often struggle to achieve optimal performance because of their sensitivity to system parameters and operating conditions. To address this challenge, the DQN algorithm, which is capable of model-free learning, is employed to automatically optimize the PID gains (Kp, Ki, Kd). A mobile robot model with a wheelbase of 0.5 m is used, and performance is evaluated on three different reference trajectories. Over a training process of 50 episodes, the DQN algorithm determines the PID parameters that minimize tracking error. Experimental results demonstrate that the DQN-PID approach achieves successful trajectory tracking on all three trajectory types, with satisfactory control-signal smoothness and energy efficiency. The main advantages of the proposed method are model-free learning, adaptability to different trajectory types, and the elimination of manual tuning. The results indicate that DQN-based PID optimization provides an effective solution for mobile robot applications and can be deployed in industrial environments.
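The abstract does not reproduce the paper's full DQN implementation, so the following is only a rough illustration of the underlying idea: an agent repeatedly runs tracking episodes, receives the negative accumulated tracking error as reward, and greedily converges on the gain that performs best. To keep the sketch short and self-contained, it replaces the neural Q-network with a tabular value estimate, tunes only Kp over a discrete grid, and uses a toy first-order plant; the grid bounds, fixed Ki/Kd values, and plant dynamics are illustrative assumptions, not the paper's robot model.

```python
import numpy as np

class PID:
    """Simple discrete-time PID controller."""
    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def episode_cost(gains, ref=1.0, steps=100, dt=0.1):
    """Accumulated |tracking error| for a toy first-order plant x' = -x + u
    (an assumed stand-in for the mobile robot dynamics)."""
    pid = PID(*gains, dt=dt)
    x, cost = 0.0, 0.0
    for _ in range(steps):
        err = ref - x
        u = pid.step(err)
        x += (-x + u) * dt          # forward-Euler plant update
        cost += abs(err) * dt
    return cost

# Epsilon-greedy value learning over a discrete Kp grid
# (tabular stand-in for the DQN; Ki and Kd are held fixed here).
rng = np.random.default_rng(0)
kp_grid = np.linspace(0.5, 10.0, 20)    # assumed candidate gains
q = np.zeros(len(kp_grid))              # estimated reward (= -cost) per action
counts = np.zeros(len(kp_grid))
for ep in range(50):                    # 50 training episodes, as in the paper
    if rng.random() < 0.3:
        a = int(rng.integers(len(kp_grid)))   # explore
    else:
        a = int(np.argmax(q))                 # exploit current estimate
    r = -episode_cost((kp_grid[a], 0.5, 0.1))
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]      # incremental mean of observed reward

best_kp = float(kp_grid[int(np.argmax(q))])
```

In the paper's setting, the tabular estimate becomes a Q-network over all three gains and the toy plant becomes the 0.5 m wheelbase robot model, but the reward shaping (penalizing tracking error) and the episode loop follow the same pattern.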