
Hospital Inventory Management Through Markov Decision Processes @runtime / Marco Biagi, Laura Carnevali, Francesco Santoni, Enrico Vicario. - ELECTRONIC. - 11024:(2018), pp. 87-103. (Paper presented at the International Conference on Quantitative Evaluation of SysTems (QEST)) [10.1007/978-3-319-99154-2_6].

Hospital Inventory Management Through Markov Decision Processes @runtime

Marco Biagi; Laura Carnevali; Francesco Santoni; Enrico Vicario
2018

Abstract

Stock management in a hospital requires achieving a trade-off between conflicting criteria, with mandatory requirements on the quality of patient care as well as on purchasing and logistics costs. We address daily drug ordering in a ward of an Italian public hospital, where patient admission/discharge and drug consumption during the stay are subject to uncertainty. To derive optimal control policies minimizing the overall purchasing and stocking cost while avoiding drug shortages, the problem is modeled as a Markov Decision Process (MDP), fitting the statistics of hospitalization time and drug consumption through a discrete phase-type (DPH) distribution or a Hidden Markov Model (HMM). A planning algorithm that operates at run-time iteratively synthesizes and solves the MDP over a finite horizon, applies the first action of the best policy found, and then moves the horizon forward by one day. Experiments show the advantage of the proposed approach over baseline inventory management policies.
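The receding-horizon planning loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy inventory MDP, its cost parameters (`ORDER_COST`, `HOLDING_COST`, `SHORTAGE_COST`), and the fixed daily demand distribution `DEMAND_PROBS` are all assumptions, standing in for the DPH/HMM-fitted statistics of the actual model.

```python
import random

# Hypothetical toy model: state = stock level, action = order quantity.
MAX_STOCK = 10
MAX_ORDER = 5
ORDER_COST = 2.0       # per unit ordered (assumed)
HOLDING_COST = 1.0     # per unit held overnight (assumed)
SHORTAGE_COST = 50.0   # per unit of unmet demand (assumed)
DEMAND_PROBS = {0: 0.2, 1: 0.5, 2: 0.3}  # assumed daily demand distribution

def step(stock, order, demand):
    """One day's transition: serve demand, then pay order/holding/shortage costs."""
    served = min(stock + order, demand)
    shortage = demand - served
    next_stock = min(stock + order - served, MAX_STOCK)
    cost = ORDER_COST * order + HOLDING_COST * next_stock + SHORTAGE_COST * shortage
    return next_stock, cost

def solve_finite_horizon(horizon):
    """Backward value iteration over the finite-horizon MDP.

    Returns the optimal first-day order quantity for each stock level.
    """
    V = [0.0] * (MAX_STOCK + 1)  # terminal values
    first_day_policy = [0] * (MAX_STOCK + 1)
    for _ in range(horizon):
        new_V = [0.0] * (MAX_STOCK + 1)
        best_action = [0] * (MAX_STOCK + 1)
        for s in range(MAX_STOCK + 1):
            best = float("inf")
            for a in range(MAX_ORDER + 1):
                expected = 0.0
                for d, p in DEMAND_PROBS.items():
                    ns, c = step(s, a, d)
                    expected += p * (c + V[ns])
                if expected < best:
                    best, best_action[s] = expected, a
            new_V[s] = best
        V = new_V
        first_day_policy = best_action  # last backward step = first day forward
    return first_day_policy

def receding_horizon(days=7, horizon=5, stock=3, seed=0):
    """Each day: solve the finite-horizon MDP, apply the first action of the
    best policy, observe the realized demand, then shift the horizon by one day."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(days):
        policy = solve_finite_horizon(horizon)
        order = policy[stock]
        demand = rng.choices(list(DEMAND_PROBS),
                             weights=list(DEMAND_PROBS.values()))[0]
        stock, cost = step(stock, order, demand)
        total += cost
    return total
```

With a high shortage penalty relative to holding cost, the computed policy orders up toward a level that covers likely demand, which is the qualitative behavior the paper's receding-horizon scheme exploits on the real, DPH/HMM-driven model.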
International Conference on Quantitative Evaluation of SysTems (QEST) 2018
Files in this record:
No files are associated with this record.

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1149477
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 2