Autonomous navigation is a crucial capability for modern robots, yet navigating unknown and constantly changing environments remains a difficult task. Deep Q-Learning (DQL), a powerful reinforcement learning method, has recently been applied to decision-making in robotic navigation; nevertheless, its performance is known to degrade because it is highly sensitive to hyperparameters, including the learning rate, discount factor, exploration strategy, and even the network architecture. This paper addresses hyperparameter tuning of a Deep Q-Network (DQN) via Bayesian Optimization (BO), focusing on real-time navigation tasks and aiming to improve convergence speed, sample efficiency, and generalization. We present a new BO-DQN framework that uses \textit{Gaussian process} surrogate models with the Upper Confidence Bound (UCB) acquisition function, allowing finer refinement of hyperparameters in later iterations of DQN training. We evaluate the framework in an array of simulated and real-world robotic environments; the results indicate that the BO-tuned models outperform both grid-search and random-search baselines, converging faster and achieving greater stability and higher task accuracy. Furthermore, the proposed method remains robust under noisy sensors and varying task complexity. Owing to its low sample requirements and real-time responsiveness, this work offers a practical approach to deploying DQN policies on mobile robots operating in uncertain conditions.
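
To make the mechanics concrete, the sketch below illustrates the kind of BO loop the abstract describes: a Gaussian process surrogate (here scikit-learn's \texttt{GaussianProcessRegressor}) with a UCB acquisition function searching over two DQN hyperparameters, the learning rate and the discount factor. This is a minimal illustrative sketch, not the authors' implementation; in particular, \texttt{evaluate\_dqn} is a hypothetical placeholder that would be replaced by a full DQN training run returning the mean evaluation reward.

\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Search space: log10(learning rate) and discount factor (gamma).
BOUNDS = np.array([[-5.0, -2.0],
                   [0.90, 0.999]])

def evaluate_dqn(log_lr, gamma):
    """Hypothetical stand-in for a full DQN training run.

    In practice: train a DQN with lr = 10**log_lr and discount gamma,
    then return the mean evaluation reward.
    """
    return -(log_lr + 3.5) ** 2 - 50.0 * (gamma - 0.98) ** 2 + rng.normal(0, 0.05)

def ucb(mu, sigma, kappa=2.0):
    # Upper Confidence Bound: favour points with high mean or high uncertainty.
    return mu + kappa * sigma

# Initial random design.
X = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(5, 2))
y = np.array([evaluate_dqn(*x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for it in range(20):
    gp.fit(X, y)
    # Maximise the acquisition function over a random candidate set.
    candidates = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(1000, 2))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(ucb(mu, sigma))]
    y_next = evaluate_dqn(*x_next)
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

best = X[np.argmax(y)]
print(f"best: lr={10**best[0]:.2e}, gamma={best[1]:.4f}")
\end{verbatim}

Under this design, each BO iteration proposes one hyperparameter configuration, evaluates it once, and refits the surrogate, which is what keeps the sample requirement low relative to grid or random search.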