Effectiveness study of learning algorithms in supply chains of a dairy business


Erika P. Arellano-Cruz
Albino Martínez-Sibaja
José P. Rodríguez-Jarquin
Rubén Posada-Gómez
Angélica M. Bello-Ramírez
Juan C. Núñez-Dorantes

Keywords

Reinforcement learning; Deep learning; Supply chain; Inventory control.

Abstract

Objective: To develop a control system to prevent over-response of the supply chain of a dairy business.


Methodology: Four reinforcement learning algorithms were evaluated: DQN, Double DQN, Dueling DQN, and Dueling Double DQN, each under two demand distributions: normal and uniform.
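The architectural difference between DQN and its Dueling variant is the way Q-values are assembled from two streams: a state-value estimate V(s) and per-action advantages A(s,a), recombined as Q(s,a) = V(s) + A(s,a) − mean_a A(s,a). A minimal sketch of that aggregation step (not the paper's implementation; the values below are illustrative, e.g. actions could be candidate order quantities):

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine a state-value V(s) with advantages A(s,a) into Q-values.

    Dueling DQN aggregates the two network streams as
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a);
    subtracting the mean advantage keeps the V/A decomposition identifiable.
    """
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

# Hypothetical example: state value 2.0, three candidate actions
q = dueling_q(2.0, [1.0, 0.0, -1.0])  # -> [3.0, 2.0, 1.0]
```

In a full agent, `value` and `advantages` would be the outputs of two heads of the same network; only this final combination distinguishes the Dueling architecture from plain DQN.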


Results: Results were computed over the stable phase of learning (the last 10,000 episodes). The means of DQN and DDQN were very similar. To validate whether the Dueling DQN algorithm outperforms the DQN algorithm, a non-parametric test was applied to compare the mean ranks of two related samples and determine whether they differ. The p-values were 5.83e−38 and 0.000 for the normal and uniform distributions, respectively.
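The non-parametric comparison of mean ranks over two related samples described above matches the Wilcoxon signed-rank test. A minimal sketch of how such a comparison could be run on paired per-episode costs; the data here are synthetic, not the paper's results:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical paired episode costs (cost units) for two algorithms,
# where the second is systematically cheaper by ~5 units:
costs_dqn = rng.normal(160.0, 5.0, size=30)
costs_dueling = costs_dqn - rng.normal(5.0, 1.0, size=30)

# Paired non-parametric test: are the two related samples different?
stat, p = wilcoxon(costs_dqn, costs_dueling)
```

A small p-value rejects the hypothesis that the paired cost differences are symmetric about zero, i.e. it supports a real performance gap between the two algorithms.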


Conclusions: The algorithm with the best results is Dueling DQN, with an average total cost of 151.27 units for normally distributed demand and 155.3 units for uniformly distributed demand. This method also shows less variability once convergence is achieved.

