Machine Learning methods for quantitative evaluation / Mohammad Amin Zadenoori. - (2024).
Machine Learning methods for quantitative evaluation
Mohammad Amin Zadenoori
2024
Abstract
Operation of a variety of natural or man-made systems subject to uncertainty is maintained within a range of safe behavior through run-time sensing of the system state and control actions selected according to some strategy. When the system is observed from an external perspective, the control strategy may not be known and must instead be reconstructed by joint observation of the applied control actions and the corresponding evolution of the system state. This is largely hindered by limitations in sensing the state of the system and by different levels of noise.

We address the problem of optimal selection of control actions for a stochastic system with unknown dynamics operating under a controller with an unknown strategy, for which we can observe trajectories made of the sequence of control actions and noisy observations of the system state, labeled by the exact value of some reward function. To this end, we present an approach to train an input-output hidden Markov model (IO-HMM) as a generative stochastic model that describes the state dynamics of a POMDP, by applying an optimization objective adapted from the literature. The learning task is constrained in two ways: the only available data is a limited number of trajectories of applied actions and noisy observations of the system state; and the high cost of failures rules out interaction with the online environment, precluding exploratory testing.

Traditionally, stochastic generative models have been used to learn the dynamics of the underlying system and to select appropriate actions for the defined task. However, current state-of-the-art techniques, in which the state dynamics of the POMDP is first learned and then strategies are optimized over it, frequently fail because the model that best fits the data may not be well suited for control. Using the aforementioned optimization objective, we tackle the problems related to model misspecification.

The proposed methodology is illustrated in a scenario of failure avoidance for a multi-component system. The quality of decision-making is evaluated using the reward collected on test data and compared with the standard approach from the previous literature.
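For orientation, the following is a minimal sketch of the kind of discrete IO-HMM generative model the abstract refers to: hidden state transitions conditioned on the applied action (the "input"), a noisy observation channel (the "output"), and trajectory sampling under an external controller. All names, dimensions, and distributions here are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, n_obs = 3, 2, 4  # illustrative sizes

# Transition tensor: T[a, s] is a distribution over next states,
# i.e. T[a, s, s'] = P(s' | s, a) -- the action-conditioned dynamics.
T = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

# Emission matrix: E[s, o] = P(o | s), the noisy observation channel.
E = rng.dirichlet(np.ones(n_obs), size=n_states)

def sample_trajectory(policy, horizon=10, s0=0):
    """Sample a trajectory of (action, observation) pairs.

    `policy` maps the latest observation to an action; it stands in
    for the unknown controller whose strategy is only visible through
    the logged trajectories.
    """
    s = s0
    o = rng.choice(n_obs, p=E[s])
    traj = []
    for _ in range(horizon):
        a = policy(o)                        # controller reacts to a noisy observation
        s = rng.choice(n_states, p=T[a, s])  # hidden state update, conditioned on the action
        o = rng.choice(n_obs, p=E[s])        # noisy observation of the new hidden state
        traj.append((a, o))
    return traj

print(sample_trajectory(lambda o: o % n_actions))
```

Learning would then amount to fitting T and E from such logged trajectories (e.g., via expectation-maximization), which is where the choice of optimization objective discussed in the abstract becomes critical: the maximum-likelihood model may not be the best model for downstream control.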
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| Thesis_.pdf | Open access | Editorial PDF (version of record) | Open Access | 1.04 MB | Adobe PDF |
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.



