With the rise of AI and robotics, the manufacturing sector is entering Industry 4.0 — a transformative era marked by intelligent systems and autonomous processes. Reinforcement Learning (RL) has emerged as a powerful tool for managing the dynamic and unpredictable nature of manufacturing, optimizing objectives like throughput, resource utilization, and downtime reduction.
However, a major challenge remains: scalability. RL models often fail to adapt when manufacturing systems change, requiring costly and time-consuming retraining. These changes may involve adding or removing workstations, altering machine cycle times, or adjusting buffer capacities. This need for frequent retraining limits the practical application of RL in dynamic manufacturing environments.
To unlock AI’s full potential in manufacturing, a scalable and generalizable RL approach is essential — one that adapts to system changes without the burden of extensive retraining. This would reduce operational overhead, minimize energy consumption, and enhance the resilience of manufacturing systems. In this project, we develop a ‘train small, deploy large’ framework for manufacturing systems: a model trained on a 3-workstation manufacturing line can be deployed on a system with an arbitrary number of workstations w. The framework accommodates arbitrary cycle times, buffer capacities, and reliability parameters, so it adapts to such changes without any additional retraining.
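The source does not specify the mechanism behind the size-agnostic policy, but one common way to achieve it is a shared per-workstation encoder with a permutation-invariant pooling step, so the same weights apply to lines of any length. The sketch below illustrates this idea under assumed names (the feature layout, `policy_logits`, and all dimensions are hypothetical, not the project's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-workstation feature layout:
# [cycle_time, buffer_capacity, buffer_level, machine_up]
FEATURES = 4
HIDDEN = 8

# Weights are shared across workstations, so the parameter count
# is independent of the number of workstations w.
W_enc = rng.normal(scale=0.1, size=(FEATURES, HIDDEN))
w_out = rng.normal(scale=0.1, size=HIDDEN)

def policy_logits(line_state: np.ndarray) -> np.ndarray:
    """line_state: (w, FEATURES) array -> one action logit per workstation."""
    h = np.tanh(line_state @ W_enc)   # shared encoder, shape (w, HIDDEN)
    context = h.mean(axis=0)          # permutation-invariant line summary
    return (h + context) @ w_out      # per-workstation logits, shape (w,)

small = rng.normal(size=(3, FEATURES))   # "train small": 3 workstations
large = rng.normal(size=(10, FEATURES))  # "deploy large": 10 workstations
print(policy_logits(small).shape)        # works for w = 3
print(policy_logits(large).shape)        # same weights, w = 10
```

Because the encoder and the pooled summary are reused for every workstation, a policy trained on the 3-station line can be evaluated on a 10-station line without retraining, which is the scaling property the framework targets.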