Robotic tasks often require multiple manipulators to improve efficiency and speed, but coordinating them increases complexity in terms of collaboration, collision avoidance, and an expanded state-action space. To address these challenges, we propose a multi-level approach combining Reinforcement Learning (RL) and Dynamic Movement Primitives (DMPs) to generate adaptive, real-time trajectories for new tasks in dynamic environments from a library of demonstrations. The method ensures collision-free trajectory generation and efficient collaborative motion planning. We validate the approach through experiments with UR5e robotic manipulators in the PyBullet simulation environment.
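To make the DMP component concrete, a minimal sketch of a standard one-dimensional discrete DMP rollout follows. This is an illustrative implementation of the generic DMP formulation (spring-damper transformation system plus an exponentially decaying canonical phase), not the paper's specific method; all parameter values (`alpha`, `beta`, `alpha_x`) and the `forcing` hook are assumptions chosen for critical damping. A learned forcing term, fit from demonstrations, would be passed in via `forcing` to shape the motion.

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.001, T=1.0,
                alpha=25.0, beta=25.0 / 4.0, alpha_x=3.0,
                forcing=None):
    """Integrate a 1-D discrete Dynamic Movement Primitive with Euler steps.

    Transformation system: tau*dv = alpha*(beta*(g - y) - v) + f(x)
    Canonical system:      tau*dx = -alpha_x * x
    With forcing=None (f = 0) the trajectory converges to the goal g.
    Gains here are illustrative defaults, not values from the paper.
    """
    y, v, x = float(y0), 0.0, 1.0  # position, scaled velocity, phase
    traj = [y]
    for _ in range(int(T / dt)):
        f = forcing(x) if forcing is not None else 0.0
        dv = (alpha * (beta * (g - y) - v) + f) / tau
        dy = v / tau
        dx = -alpha_x * x / tau
        v += dv * dt
        y += dy * dt
        x += dx * dt
        traj.append(y)
    return np.array(traj)

# Without a forcing term the DMP simply converges to the goal.
traj = dmp_rollout(y0=0.0, g=1.0)
```

In a library-based scheme like the one described above, each demonstration would be encoded as a forcing term, and a higher-level (e.g. RL) policy would select and adapt goals `g` and timing `tau` online.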