Software-defined networking (SDN) is a promising networking paradigm that can accommodate the rapid growth of network capacity and the stringent quality-of-service requirements of emerging bandwidth-intensive applications. SDN decouples the control plane from the data plane, which simplifies network resource management, control, and monitoring. In this paper, we exploit the controller’s computational capability and its closed-loop control of the network to apply recent advances in deep reinforcement learning (DRL) to routing in SDN. In particular, we design and implement a deep deterministic policy gradient and transfer learning-based dynamic routing (DDPG-DR) algorithm, which interacts with the network environment and dynamically optimizes traffic routing in real time. The algorithm employs transfer learning (TL) to reduce the retraining time of the DRL agent and to improve its scalability. We implement DDPG-DR on a testbed consisting of a real SDN controller, Ryu, and a network emulator, Mininet, to evaluate its performance in realistic network scenarios. Our evaluation results confirm that the proposed algorithm outperforms the traditional Open Shortest Path First (OSPF) protocol in terms of delay and throughput, and that adopting TL leads to faster convergence.
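As background, the core machinery of DDPG referred to above — a deterministic actor, a Q-value critic, and Polyak-averaged target networks updated from sampled transitions — can be sketched as follows. This is a minimal one-dimensional NumPy illustration of the update structure, not the paper's implementation: the linear actor/critic, learning rates, soft-update rate, and the toy reward are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_update(target, online, tau=0.01):
    """Polyak-average target weights toward the online weights."""
    return (1.0 - tau) * target + tau * online

# Toy 1-D state/action: a linear actor a = w_actor * s and a linear
# critic Q(s, a) = w_critic . [s, a] stand in for the deep networks.
w_actor = 0.0
w_critic = np.zeros(2)
w_actor_t = w_actor            # target actor
w_critic_t = w_critic.copy()   # target critic

gamma, lr = 0.99, 0.05
for _ in range(200):
    # One transition from the (toy) environment, with exploration noise.
    s = rng.uniform(-1.0, 1.0)
    a = w_actor * s + rng.normal(scale=0.1)
    r = -(a - s) ** 2          # assumed reward: action should track state
    s2 = rng.uniform(-1.0, 1.0)

    # Critic update: the TD target uses the *target* actor and critic.
    a2 = w_actor_t * s2
    y = r + gamma * (w_critic_t @ np.array([s2, a2]))
    q = w_critic @ np.array([s, a])
    w_critic = w_critic + lr * (y - q) * np.array([s, a])

    # Actor update: deterministic policy gradient, dQ/da * da/dw_actor.
    w_actor = w_actor + lr * w_critic[1] * s

    # Slowly track the online networks with the targets.
    w_actor_t = soft_update(w_actor_t, w_actor)
    w_critic_t = soft_update(w_critic_t, w_critic)
```

In DDPG-DR the state would encode the observed network conditions (e.g. link statistics gathered by the controller) and the action would parameterize the routing decision; the transfer-learning step reuses weights trained on one network scenario to initialize the agent for another, which is what shortens retraining.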